item 205926

Do Bayesian statistics rule the brain?

13 points | dood | 18 years ago | mindhacks.com | reply

26 comments

[+] kurtosis|18 years ago|reply
Here's a contrary view: there is a very famous tradition in biology of deriving the form and function of organisms by mathematical optimization (see D'Arcy Thompson). It shouldn't be surprising that the nervous system would converge on Bayesian statistics. The Dutch book argument shows that any other method of updating beliefs about the world will lose money under some gambling strategy. If "gambling" is replaced by "foraging" or "mating", then Bayes is the optimal way to play (whether we're playing for the interests of the organism or its genes).

But saying that the brain is Bayesian is not that profound. It's like saying that the brain is ruled by electricity. The key is what priors are being modeled, how is inference implemented with neurons, and what constraints, or "hyperparameters" are built into these priors.
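The Dutch book point above can be made concrete with a toy sketch (my own illustration, not from the thread): an agent whose degrees of belief in an event and its complement don't sum to 1 will accept a pair of bets, each at its own "fair" price, that guarantees a loss no matter what happens.

```python
def dutch_book_loss(p_a, p_not_a):
    """Guaranteed loss for an agent who buys unit bets on both A and
    not-A at its own subjective prices. Exactly one bet pays out 1."""
    cost = p_a + p_not_a   # total price the agent considers fair
    payout = 1.0           # exactly one of A / not-A occurs
    return cost - payout   # positive iff the beliefs are incoherent

# Incoherent beliefs (sum to 1.2): a sure loss of 0.2 per round.
assert abs(dutch_book_loss(0.6, 0.6) - 0.2) < 1e-9
# Coherent beliefs (sum to 1): no sure loss can be constructed this way.
assert abs(dutch_book_loss(0.7, 0.3)) < 1e-9
```

Probabilities obeying the Bayesian axioms are exactly the ones immune to this kind of sure-loss book, which is the sense in which any other updating scheme "loses money."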

[+] ralphb|18 years ago|reply
"But saying that the brain is Bayesian is not that profound."

And, furthermore, it's not really a new idea.

Judea Pearl's work in the late '80s suggested something along these lines. I have also seen it suggested that some of Marvin Minsky's groundwork back in the '60s pointed in this direction, but YMMV on that one.

[+] LPTS|18 years ago|reply
You obviously know more about the science and math here than I do, so I'm offering these thoughts deferentially. I'm coming at this as a jack-of-all-trades type who did a ton of philosophy of mind and cognitive science in college but am somewhat deficient in the hard science.

"What priors are being modeled"

I think this is the most interesting thing. There is so much information about how memories are recreated as you remember them, and about memory being unreliable. Even if you had an answer to the question "what priors are being modeled?", it would need to be indexed to a time. Indexing your question to times is where it gets interesting.

This raises a question that is much harder, but necessary to answer if we want to get at the truth here. Something like: "How can the constantly changing sea of fragmented memories we base our idea of self on serve as reliable priors at all?" Or, less poetically and more formally:

Given a person P, at two times T1 and T2, and two sets of priors M1 and M2, such that M1 and M2 belong to P at T1 and T2 respectively, and between T1 and T2 P obtained exactly one datum D: what is the relationship between M1 at T1 in P, and (M2 - D) at T2 in P?

And, generally, what calculus explains the relationship between Mx at Tx in P, and (My-Dn) at Ty in P, where Dn is the sum of the data P acquired between Tx and Ty?

I don't think M1 is equivalent to M2 - D in the first case, and I don't think My - Dn is equivalent to Mx in the second. All the evidence that memories are inconsistent and recreated counts toward my intuition. The way something like a sudden smell can prime memories, or an emotional state can alter recall, also suggests this.
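For contrast, here is what the ideal case looks like: for a textbook Bayesian agent with conjugate (Beta) priors over some rate, conditioning on one Bernoulli datum D is exactly invertible, so (M2 - D) recovers M1 perfectly. This is a sketch of the idealization the comment above doubts holds for human memory; the function names are mine.

```python
# Priors are represented as Beta pseudo-counts (a, b).
def update(prior, datum):
    """Condition a Beta prior on one Bernoulli observation."""
    a, b = prior
    return (a + 1, b) if datum else (a, b + 1)

def downdate(posterior, datum):
    """Remove one observation: the exact inverse of update()."""
    a, b = posterior
    return (a - 1, b) if datum else (a, b - 1)

M1 = (2, 3)         # priors at time T1
D = True            # the one datum acquired between T1 and T2
M2 = update(M1, D)  # priors at time T2

# For the ideal agent, (M2 - D) == M1 holds exactly.
assert downdate(M2, D) == M1
```

The evidence about memory reconstruction suggests real brains violate this invertibility: "subtracting" the datum would not return the original priors.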

"how is inference implemented with neurons"

This seems like the most trivial, least profound, and least interesting part. We know that the brain performs these inferences, and we know that even very simple systems based on cellular automata can be Turing complete and make inferences. The particular details will not shed light on the fundamental problems of understanding the mind.

"what constraints, or "hyperparameters" are built into these priors."

It sounds to me like their idea of hallucinations and delusions as breakdowns of Bayesian functionality could bear fruit that answers this question, although I don't have an answer myself.

I also wonder: wasn't "the brain is ruled by electricity" a profound insight for its time?

[+] schtog|18 years ago|reply
If Bayesian probability is how the brain works, why is it so hard for most people to understand this: http://en.wikipedia.org/wiki/Monty_Hall_problem

Seriously, even the cleverest people have huge trouble grasping it.
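The counterintuitive 2/3 answer is easy to check empirically. A minimal simulation (my own sketch, stdlib only):

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Estimate the win rate of the switch/stay strategies."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's initial pick
        # Host opens a goat door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Running `monty_hall(switch=True)` comes out near 2/3 and `monty_hall(switch=False)` near 1/3: switching wins exactly when the initial pick was wrong, which happens 2/3 of the time.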

[+] gnaritas|18 years ago|reply
Because people don't necessarily have a conscious understanding of how the lower layers of their minds work. Being a Bayesian neural network doesn't imply you understand Bayesian neural networks in general.
[+] robg|18 years ago|reply
No, but it's as good a description as any for neuronal interactions. The problem, though, is not with a finite set of connections, but rather the infinite number of possibilities.
[+] eru|18 years ago|reply
Or rather, a very large number of possibilities.
[+] michael_dorfman|18 years ago|reply
Anyone have a clue as to why (or, indeed, whether) Bayesian mathematics work any better than the more old-school neural network algorithms?
[+] socksandsandals|18 years ago|reply
The reasons why the neural network model failed and why Bayesian inference is much more applicable are described very well by Jeff Hawkins in his book "On Intelligence" (http://onintelligence.com/). I highly recommend it, if only for the questions and trains of thought it raises.
[+] robg|18 years ago|reply
Honestly, I think it just comes down to simplicity. Bayesian algorithms tend to make fewer assumptions a priori. By contrast, look at something like backprop.