item 34601012

Critical brain hypothesis: A physical theory for when the brain performs best

166 points | blegh | 3 years ago | quantamagazine.org

42 comments

[+] amelius|3 years ago|reply
"The critical brain hypothesis suggests that neural networks do their best work when connections are not too weak or too strong."

Isn't this just about as obvious as the fact that traffic flows best when traffic lights are neither always red nor always green?

[+] evanb|3 years ago|reply
"Critical" has a precise meaning in these kinds of systems; it essentially means when correlation lengths diverge (or, with a finite brain, become the size of the whole). In physical systems this happens at second-order phase transitions. Unfortunately most familiar phase transitions are first-order (boiling and freezing, for example), but the development of macroscopic magnetism as iron cools through its Curie point is a second-order example.

Away from the critical point the dynamics become either (1) too strong, meaning that information has a hard time getting from one spot to another because it's stuck in too much traffic, or (2) too weak, meaning that information has trouble being processed, because the traffic is so light that cars don't spend enough time together to come to equilibrium.

In these cases the correlations will be short-distance, in contrast to critical.
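As a toy numerical sketch of that short-range vs. critical distinction (the values of ξ and η below are purely illustrative; η = 0.25 happens to be the 2D Ising exponent):

```python
import numpy as np

def exp_corr(r, xi):
    """Off-critical: correlations decay exponentially with length scale xi."""
    return np.exp(-r / xi)

def power_corr(r, eta):
    """Critical: no characteristic scale; correlations decay as a power law."""
    return r ** (-eta)

r = np.arange(1.0, 101.0)        # distances 1..100 (lattice sites, say)
off = exp_corr(r, xi=5.0)        # correlation length of 5 sites
crit = power_corr(r, eta=0.25)   # eta = 0.25 is the 2D Ising value

# At r = 50 the off-critical correlation is essentially gone, while the
# critical one is still appreciable. That is what a diverging correlation
# length buys you: influence that reaches across the whole system.
print(off[49], crit[49])
```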

[+] 0xcafefood|3 years ago|reply
"The critical brain hypothesis suggests that neural networks do their best work when connections are not too weak or too strong." is actually a tautology, no?

Without a crisp explanation for what "too weak" or "too strong" mean, this is just saying "Neural networks work best when connections couldn't be changed to make them work better."

[+] karmakurtisaani|3 years ago|reply
I had a similar issue with the article. Essentially the information content seems to boil down to "there is a state where the brain works the best". For experts there is probably a lot to learn from the technicalities of this research, but the article leaves a layman a bit cold.
[+] clnq|3 years ago|reply
As far as I understood, they are talking about a certain homeostatic state of the brain that is optimal for the right amount of signal transmission between neurons. If a set of neurons is at the critical point, it neither fizzles the signal out nor amplifies it.

I think this is somewhat related to criticality in nuclear fission reactors: if a reactor is sub-critical, any reaction fizzles out; if it is super-critical, the reaction grows exponentially. Close to the critical point, the reaction is sustained at a relatively constant rate.

The brain probably couldn't function very well if its neurons were generally super-critical, with all of them firing quickly whenever one fires. And it probably wouldn't function well if the neurons were all sub-critical, so that any attempt at transmitting a signal died out.
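As a toy sketch of that analogy (purely illustrative: a branching process with branching ratio sigma standing in for the reactor's or brain's criticality; Poisson offspring is an arbitrary modelling choice):

```python
import numpy as np

def mean_avalanche(sigma, trials=2000, cap=10_000, seed=0):
    """Average avalanche size in a branching process: each firing neuron
    triggers Poisson(sigma) downstream firings; sizes are capped so the
    super-critical case can't run away forever."""
    rng = np.random.default_rng(seed)
    sizes = []
    for _ in range(trials):
        active, total = 1, 1
        while active and total < cap:
            active = int(rng.poisson(sigma, size=active).sum())
            total += active
        sizes.append(min(total, cap))
    return float(np.mean(sizes))

sub = mean_avalanche(0.8)    # sub-critical: fizzles out (mean ~ 1/(1 - sigma) = 5)
crit = mean_avalanche(1.0)   # critical: avalanche sizes span many scales
sup = mean_avalanche(1.2)    # super-critical: many runs hit the cap (runaway)
print(sub, crit, sup)
```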

The researchers studying this critical point theory for the brain also seem to be seeing parallels between brain activity and other critical systems. In particular, a system at a critical point can be tipped over it by a small input, or simply by a change in one of its components, and some systems in nature seem to use this to stay organised.

The hypothesis is very cool and seems sensible, but the article and video wander a bit too far into adjacent topics to offer a good introduction to the subject. For example, I couldn't find the relevance of the critical point between random and organised systems in metal atoms. The example of seeing predators in nature was also a bit contrived, although it illustrates how a small change in a large neural net can produce large, meaningful organisation in the activation of neurons.

It feels very intuitive that a brain needs to operate at this perfect balance between super-criticality and sub-criticality; I can understand why some scientists feel so strongly about it.

I would welcome corrections if I did not understand something as intended.

[+] anonymousDan|3 years ago|reply
To me it wasn't obvious at first glance that more information is transmitted with an intermediate number of connections than with a strongly connected network. I guess there is a link to entropy, i.e. how surprised you can be by the information received at one end of the network, given its connectivity.
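A minimal sketch of that intuition, assuming we model the downstream signal as a single binary "fires / doesn't fire" event:

```python
import math

def entropy_bits(p):
    """Shannon entropy (in bits) of a binary event with firing probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# A downstream neuron that always fires (saturated, strongly connected) or
# never fires (the signal died out) is maximally predictable: zero bits of
# surprise per observation. Maximum surprise, one full bit, sits in between.
for p in (0.01, 0.5, 0.99):
    print(p, round(entropy_bits(p), 3))
```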
[+] mrinfinite|3 years ago|reply
Reminds me of definitions of getting into a flow while performing a task: the task shouldn't be so hard that it feels impossible, nor so easy that it's tedious and not worth any mental effort.

The man compared it to playing chess against someone at your rating, or slightly above or below it. Much more engaging than playing a CHESS GOD, or a total first-timer.

[+] asplake|3 years ago|reply
That makes it sound like optimising. To my not very great understanding, I think it’s more like keeping things right on the edge.
[+] actually_a_dog|3 years ago|reply
Not really. Signals have a finite power level. If you open all the lanes all the time, you'll get a very attenuated signal throughout the entire network. If some connections are stronger than others, that's when you can actually see interesting behavior.
[+] snarfy|3 years ago|reply
Neither always red nor always green, but also changing with a timing related to the distance and speed.
[+] quantum_mcts|3 years ago|reply
"The Principles of Deep Learning Theory" https://arxiv.org/abs/2106.10165 has a rather rigorous analysis of modern deep learning (based on the mathematical apparatus of Quantum Field Theory (QFT)) with a similar insight. They suggest that learning happens in the critical regime, and use running couplings, the renormalization group, and other fancy QFT math to derive some insights in the DL field. Here's a HN thread, by the way: https://news.ycombinator.com/item?id=31051540.
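A toy linear-network version of the signal-propagation side of that argument (not the book's actual derivation; the critical weight variance 1/fan-in below is the linear-network value, and ReLU nets want 2/fan-in instead):

```python
import numpy as np

def propagate(depth=50, width=256, gain=1.0, seed=0):
    """Push a random input through `depth` random linear layers whose
    weight entries have variance gain / width; return the final norm."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(gain / width)
        x = W @ x
    return float(np.linalg.norm(x))

small = propagate(gain=0.5)  # sub-critical init: the signal shrinks to nothing
mid   = propagate(gain=1.0)  # critical init: the norm stays O(1), info survives
big   = propagate(gain=2.0)  # super-critical init: the signal blows up
print(small, mid, big)
```

Same shape as the brain story: away from the critical weight variance, signals either die or saturate before they reach the deep layers.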
[+] tgv|3 years ago|reply
If you look for something in a complex system, and you look hard enough, you're probably going to find it. The example of epilepsy might just be seeing certain behavior through the lens of the theory. Unfortunately, the article fails to give us any hard definition of criticality.
[+] lawrencehook|3 years ago|reply
I wonder if it makes any sense to ask whether complexity and criticality are inextricable.

https://en.wikipedia.org/wiki/Ramsey_theory

> Problems in Ramsey theory typically ask a question of the form: "how big must some structure be to guarantee that a particular property holds?"

[+] mach1ne|3 years ago|reply
Amen. I think the openness of complex systems to different interpretations is the bane of a comprehensive model for neuroscience.
[+] DecayingOrganic|3 years ago|reply
Any idea on how this would affect learning with spaced repetition software? Perhaps the practice of excessive recalling with, say, Anki could essentially be detrimental to learning in some respects, as it would make certain connections in a neural network unnaturally strong and cause saturation and overactivation in the last layer.
[+] michaelcampbell|3 years ago|reply
Isn't the point to specifically NOT "excessively recall"? An SRS attempts to get you to recall something right about the time you're about to forget it, and not before.
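For reference, a simplified sketch of the classic SM-2 schedule that Anki descends from (real Anki deviates from this in several ways; this is just the textbook update rule):

```python
def sm2_next(interval, ease, quality):
    """One review step of the classic SM-2 schedule.
    quality: 0-5 self-graded recall; returns (next_interval_days, new_ease)."""
    if quality < 3:  # lapse: start the card over (ease left unchanged here)
        return 1, ease
    # Good recalls nudge the ease factor up, bad ones pull it down (floor 1.3).
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval == 0:
        return 1, ease
    if interval == 1:
        return 6, ease
    return round(interval * ease), ease

# A run of perfect recalls spaces reviews out roughly exponentially,
# which is exactly the opposite of cramming the same connection daily.
interval, ease = 0, 2.5
schedule = []
for _ in range(5):
    interval, ease = sm2_next(interval, ease, quality=5)
    schedule.append(interval)
print(schedule)  # intervals in days between successive reviews
```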
[+] varjag|3 years ago|reply
Wonder if there's a connection with Ballmer Peak.
[+] vermilingua|3 years ago|reply
I doubt Ballmer Peak needs to reach as far down as neuroscience to find a basis: developers tend to overthink, and alcohol de-thinks.
[+] Throwawayh89|3 years ago|reply
As someone with epilepsy I found this article fascinating.

Over the years I've been forced to learn a lot about neurology on my own. I've never seen anything or anyone explain how the brain operates this simply.

We're on the edge of a revolution in the way neurological diseases are treated.

[+] jcutrell|3 years ago|reply
For someone with a high level understanding of neural networks - isn’t this essentially describing what the weighted connections accomplish for a tuned network?
[+] revskill|3 years ago|reply
To me, it's always after sleep.