
Bonini's Paradox

88 points | hhs | 6 years ago | en.wikipedia.org

52 comments

[+] notafraudster | 6 years ago
I think the George Box aphorism linked at the bottom ("All models are wrong [but some are useful]") is closer to the right way to think about this.

With complexity comes additional explanatory power. Something with perfect explanatory power has infinite complexity. But the tradeoff is not linear in most problem domains; we can first add those concepts to our model that maximize the explanatory power relative to the complexity they introduce.

And much of the world, physical and social, can be explained in fairly simple models, which is excellent. For things that are less well captured with simple models, or for which precision is so important we wish to shrink the error term further, then great, pile on more complexity progressively.
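The tradeoff described above can be sketched numerically. This is a minimal, illustrative Python experiment (the data and model family are invented for the demo, not taken from the thread): fit polynomials of increasing degree to noisy data and watch the fit error fall with diminishing returns.

```python
import numpy as np

# Invented "world": a smooth signal plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)

# Model complexity knob: polynomial degree.
errors = []
for degree in range(1, 8):
    coeffs = np.polyfit(x, y, degree)
    residual = y - np.polyval(coeffs, x)
    errors.append(float(np.sqrt(np.mean(residual ** 2))))

# Error never increases with added complexity, but most of the gain
# comes from the first few terms; later degrees buy very little.
```

Plotting `errors` against degree gives the familiar elbow curve: the useful model size is where extra complexity stops paying for itself.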

[+] hprotagonist | 6 years ago
“a model that completely describes the system is about as useful as a map at 1:1 scale” — a quote from my advisor in grad school.
[+] na85 | 6 years ago
One of my profs was fond of the saying "with enough money, anyone can design a bridge that will stand up. Only an engineer can design a bridge that will just stay up".

I think the same is true of simulation and modeling: it requires a domain expert to include the useful stuff in your model while skirting or abstracting away the extraneous bits.

[+] jobigoud | 6 years ago
Well, a 1:1 map is super useful. This is another use case of VR. You can visit apartments or wander the world in Google Earth. I also used this recently for renovation work: my wife created a model of how the studio would look in the future, I imported it into Unity, and we visited it in VR before starting the groundwork.

The trick is that the map itself is 1:1, but your movement around the map is not constrained by the same limitations as the real world.

[+] ysleepy | 6 years ago
I would also say that a "complete model" is not a model anymore, but a copy of the thing itself.

A model is always an incomplete simulation of something, keeping some relevant aspects and discarding less relevant ones while remaining useful.

[+] mannykannot | 6 years ago
> This paradox may be used by researchers to explain why complete models of the human brain and thinking processes have not been created and will undoubtedly remain difficult for years to come.

This assertion would only be justified if we knew enough to create complex models that we do not understand. The reality is that we don't know enough yet to create any sort of working model of the whole human brain and its thinking processes.

[+] hprotagonist | 6 years ago
Essential nonlinearities are rife in what little neurophysiology we've encountered.

I am more or less sold on (possibly deterministic) chaotic models for things like tinnitus.

That's close enough to "a complex model we cannot understand" for me.

[+] hhs | 6 years ago
Interesting view. You write: “The reality is that we don't know enough yet to create any sort of working model of the whole human brain and its thinking processes.”

To me, this sentence seems to mean that it may be possible some time down the road to “know enough” and create a complex model of the human brain that’s understandable. There are also some in philosophy who argue that humans may be cognitively closed off from understanding things like the human brain [0]. They note that humans have biological limits to understanding certain things, just as an animal might. For instance, an ant may have biological limits on its understanding of how certain things work.

What do you think of philosophers who argue for cognitive closure?

[0]: https://en.m.wikipedia.org/wiki/Cognitive_closure_(philosoph....

[+] simonh | 6 years ago
I think that's just a restatement of the point. In order to understand, and therefore create, a model, you need a complete and thorough understanding of the original, so in terms of comprehension of the problem domain the model doesn't help much.
[+] gouh | 6 years ago
We don't necessarily create complex simulations to understand them, but to have those simulations available for building, e.g., reinforcement learning models.

To take a simple example: with a high-school wagon-rolling-down-a-slope Newtonian physics exercise, you can hold everything that explains the problem in your mind. But then take https://youtu.be/a3jfyJ9JVeM . It's still Newton, but with so many constraints that we can no longer picture everything in our minds, and we are not even solving the problem with a closed-form equation anymore. Yet we have modeled a simulation that enables us to run experiments and build locomotion algorithms. In fact, you don't even need to understand Newtonian physics to implement that paper; you can just see it as optimizing a black box.

In the future I could see this happening at a larger scale. For example, we put everything we know about the brain or about cells into a simulation; we can't picture what's really going on because the system is too complex, yet we can run simulations and optimize models to find, say, new cures and treatments.
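The "optimizing a black box" framing can be shown in a toy sketch. Everything here is invented for illustration (the `simulate` function is a stand-in projectile-range formula, not the system in the linked video): the search loop never looks inside the simulation, yet still finds a good input.

```python
import math
import random

def simulate(angle_deg):
    # Stand-in "simulation": distance reached for a launch angle.
    # The optimizer below treats this as an opaque black box.
    theta = math.radians(angle_deg)
    return math.sin(2 * theta)  # range is proportional to sin(2*theta)

def random_search(objective, lo, hi, iters=2000, seed=0):
    # Zero-knowledge optimization: just sample and keep the best.
    rng = random.Random(seed)
    best_x, best_y = None, float("-inf")
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        y = objective(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

best_angle, best_range = random_search(simulate, 0.0, 90.0)
# The search homes in near 45 degrees without "knowing" any Newton.
```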

[+] louis8799 | 6 years ago
A realistic model is hard for humans to understand, but this doesn't account for the fact that humans can leverage computing power to simulate and machine-learn whatever is "realistic".
[+] falcor84 | 6 years ago
This is an interesting point: if I create a fully accurate model that allows me to quickly simulate the outcome of any input, then that becomes an oracle I can query to better understand the problem. But this then leads me to think that my understanding would be a simpler mental model of what the simulation does, which brings me back to the original paradox.
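That loop can be sketched: query an opaque "oracle" on many inputs, then distill a simpler surrogate model from its answers. The oracle below is an arbitrary stand-in function, not any real simulator; the only assumption is that it is cheap to query.

```python
import numpy as np

def oracle(x):
    # Pretend this is the fully accurate, opaque simulation.
    return np.tanh(3 * x) + 0.1 * np.sin(20 * x)

xs = np.linspace(-1, 1, 200)
ys = oracle(xs)

# Distill a 4-coefficient "mental model" from the oracle's answers.
surrogate = np.polynomial.Polynomial.fit(xs, ys, deg=3)
max_err = float(np.max(np.abs(surrogate(xs) - ys)))

# The surrogate is understandable but wrong in places, which is the
# original paradox restated: simplicity is bought with error.
```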
[+] neokantian | 6 years ago
Abstraction, i.e. modelling, means leaving out details. Understanding then becomes knowing what details you can leave out for the given purpose. In this context, it is essential to remark that scientific abstractions, i.e. physical-world, empirical theories, need to be tested experimentally. Otherwise, they will simply lack legitimacy.
[+] hotBacteria | 6 years ago
Two points not mentioned:

1- The question of access: a theoretical exhaustive map of the brain would be easier to access than an actual brain.

2- Tools: a map is not a piece of paper anymore. A 1:1 map of Earth doesn't seem absurd, as long as we can navigate it with levels of detail.

[+] mrob | 6 years ago
A cycle-accurate emulator is a 1:1 model of the observed behavior of the original hardware under normal operating conditions (it doesn't model probing it with an oscilloscope, or running it at the wrong voltage or temperature, etc.). It's more useful than the original hardware because its state can be saved, restored, and examined; you can set breakpoints and watchpoints; you can easily interface it with other software; and it can run at different speeds from the original.
[+] sytelus | 6 years ago
You are conflating useful systems, explainable systems, and understandable systems. All models, 1:1 or not, are useful in some sense. You might also be able to explain some event or behaviour of the system using a model. However, understanding requires building an abstraction graph that allows an arbitrary chain of reasoning and inference. As the system becomes complex, such an abstraction graph may end up similarly complex, leaving a human unable to hold it in memory and run inference and reasoning over it.
[+] d--b | 6 years ago
The Paul Valéry statement’s translation is fairly poor. It sounds more like:

“What is simple is always wrong. What is not is unusable”

[+] AstralStorm | 6 years ago
And the most important bit is left out: you have to know when and how the model is inaccurate, otherwise even the simple one is useless.

Explanatory power is limited by explanation of errors.

[+] dearrifling | 6 years ago
A complex model that describes a complex system is not unusable, though. There is a reason we want to accurately simulate the Navier-Stokes equations with complex boundary conditions.
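In the same spirit, a numerical model stays usable even when no closed-form answer exists. A minimal sketch using the 1D heat equation, a far simpler cousin of Navier-Stokes (grid size and coefficients are arbitrary demo values):

```python
import numpy as np

nx, alpha, dx, dt = 50, 1.0, 1.0, 0.2    # dt <= dx**2 / (2 * alpha) for stability
u = np.zeros(nx)
u[nx // 2] = 100.0                       # hot spot in the middle

for _ in range(200):
    lap = u[:-2] - 2 * u[1:-1] + u[2:]   # discrete Laplacian on the interior
    u[1:-1] += alpha * dt / dx**2 * lap  # explicit forward-Euler update
    u[0] = u[-1] = 0.0                   # fixed cold boundaries

# No closed form was needed: the model answers questions (peak temperature,
# how far the heat has spread) by being run, not by being solved analytically.
```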
[+] phoe-krk | 6 years ago
To quote the original article: "As a model of a complex system becomes more complete, it becomes less understandable. Alternatively, as a model grows more realistic, it also becomes just as difficult to understand as the real-world processes it represents."

If your model is simple enough to represent the original system accurately, then the original system is not complex.

[+] meuk | 6 years ago
This seems to be more of a tautology than a paradox.
[+] OrgNet | 6 years ago
dang, thanks for stating the obvious professor Charles Bonini.
[+] lanevorockz | 6 years ago
I keep saying this about sociology research: they keep using single variables to track complex systems and end up failing. All of the lazy social-justice fields end up causing immense damage.