
The brain as a universal learning machine (2015)

85 points | optimalsolver | 4 years ago | lesswrong.com | reply

29 comments

[+] thedstrat|4 years ago|reply
One thing that isn't central at all, but it stood out to me.

"The amygdala appears to do something similar for emotional learning. For example, infants are born with a simple version of a fear response, which is later refined through reinforcement learning."

Positive and negative emotions can be seen as a reward/punishment mechanism - the signal that shapes a reinforcement learning policy. Our brain is able to change this policy (what counts as a positive or negative emotion) over time as our emotional intelligence matures. For example, when we are babies, we cry at anything that scares us. As we get older, we mature and the emotional reaction changes automatically: we learn that not everything should scare us. I never realized that the brain (or ULM) can modify everything, including its own policies, in response to external stimuli.
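The refinement described here can be sketched with a toy tabular update: an innate "always flee" prior that reward feedback gradually overwrites for harmless situations. Everything below - the states, the rewards, the one-step update - is my own illustrative toy, not anything from the article.

```python
import random

random.seed(0)

STATES = ["loud_noise", "stranger", "toy"]
ACTIONS = ["flee", "approach"]

# Innate prior: fleeing starts out preferred in every state.
q = {(s, a): (1.0 if a == "flee" else 0.0) for s in STATES for a in ACTIONS}

# Toy environment: approaching harmless things is rewarded,
# approaching a genuine threat is punished.
reward = {("toy", "approach"): 1.0,
          ("stranger", "approach"): 0.5,
          ("loud_noise", "approach"): -1.0}

alpha = 0.5  # learning rate
for _ in range(200):
    s = random.choice(STATES)
    a = random.choice(ACTIONS)            # explore uniformly
    r = reward.get((s, a), 0.0)
    q[(s, a)] += alpha * (r - q[(s, a)])  # one-step value update

# The refined policy: fleeing is unlearned for harmless states only.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

The innate response isn't erased wholesale; it survives exactly where the feedback still supports it (the loud noise), which is the sense in which the policy itself gets rewritten by experience.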

[+] lostmsu|4 years ago|reply
> I never realized that the brain (or ULM) can modify everything, including its own policies, in response to external stimuli.

This statement does not make sense. For the brain, learning just is the process of modifying policies. It is possible that nothing else happens when the brain is learning.

[+] wombatmobile|4 years ago|reply
> Additional indirect support comes from the rapid unexpected success of Deep Learning[7], which is entirely based on building AI systems using simple universal learning algorithms... scaled up on fast parallel hardware (GPUs). Deep Learning techniques have quickly come to dominate most of the key AI benchmarks including vision[12], speech recognition[13][14], various natural language tasks, and now even ATARI [15] - proving that simple architectures (priors) combined with universal learning is a path (and perhaps the only viable path) to AGI.

Proving?

only viable path?

[+] joe_the_user|4 years ago|reply
> This article presents an emerging architectural hypothesis of the brain as a biological implementation of a Universal Learning Machine.

I looked in the section titled "Universal Learning Machine", I looked at the footnotes (easy - there are none), and I googled and used Google Scholar. I found no coherent definition of a Universal Learning Machine.

I mean, the section I mentioned says: "An initial untrained seed ULM can be defined by 1.) a prior over the space of models (or equivalently, programs), 2.) an initial utility function, and 3.) the universal learning machinery/algorithm. The machine is a real-time system that processes an input sensory/observation stream and produces an output motor/action stream to control the external world using a learned internal program that is the result of continuous self-optimization." But it's using other vaguely defined concepts in a fairly vague fashion.
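Read charitably, the quoted triple can at least be rendered as a bare sense/learn/act loop. The sketch below is deliberately schematic - the "program" is a single gain parameter and the "universal learning machinery" is trivial hill climbing, both placeholders of mine rather than anything the article specifies - but it shows the structure of the definition.

```python
import random

random.seed(1)

def sample_prior():
    # 1) prior over the space of models/programs (here: one parameter)
    return {"gain": random.uniform(-1.0, 1.0)}

def utility(action, target=2.0):
    # 2) initial utility function: peaks when the action hits a target
    return -abs(action - target)

program = sample_prior()
obs = 1.0                                    # a constant observation stream
for _ in range(500):
    # 3) "universal learning machinery": propose a rewrite of the
    # internal program, keep it only if utility improves (hill climbing)
    candidate = {"gain": program["gain"] + random.gauss(0.0, 0.1)}
    if utility(candidate["gain"] * obs) > utility(program["gain"] * obs):
        program = candidate
    action = program["gain"] * obs           # output motor/action stream

print(f"learned gain: {program['gain']:.2f}")  # should land near 2.0
```

Note that every load-bearing part (the prior, the utility, the learner) had to be filled in by hand, which is exactly the vagueness being complained about.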

What the author is defining is kind of like a Gödel Machine [1] or Symbolic Regression [2], to give two more concrete references than any I've found in the text (well, I'm only skimming).

> The key defining characteristic of a ULM is that it uses its universal learning algorithm for continuous recursive self-improvement with regards to the utility function (reward system).

And there the author gets much more specific, and the claim is much more debatable. Of course, if you leave "continuous" vague, then you have something vague again. If you're loose enough, then the brain, by your loose definition, has a utility function. But that can easily be true yet not useful. Every macroscopic physical system can, in principle, be predicted by solving its Lagrangian, but the existence of many, many intractable macroscopic physical systems just implies many, many unsolvable, unknown, or unknowable Lagrangians.

I think the problem with outlines like this, which I think are somewhat typical of broad-thinking amateurs, is not that they're a priori a bad place to start looking at intelligence. It might be useful. But without a lot of concrete research, you wind up with seemingly simple steps like "we just maximize function R" when any known method for such maximization would take longer than the age of the universe (the problem with a Gödel Machine). Which, again, isn't necessarily terrible - maybe you have an idea of how to approximately maximize the function much more simply, in much less time. But you should know what you're up against.
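The age-of-the-universe remark is easy to make concrete: exhaustive maximization over a program space grows exponentially in program length. The constants below (an exaflop of full program evaluations per second, and so on) are generous illustrative assumptions of mine, not figures from the article.

```python
ALPHABET = 2               # bits per program symbol
EVALS_PER_SEC = 1e18       # generously, 10^18 full program evaluations/s
SECONDS_PER_YEAR = 3.15e7
UNIVERSE_AGE_YEARS = 1.38e10

for length in (64, 128, 256):
    programs = ALPHABET ** length          # size of the search space
    years = programs / EVALS_PER_SEC / SECONDS_PER_YEAR
    print(f"{length:3d}-bit programs: {years:.2e} years "
          f"({years / UNIVERSE_AGE_YEARS:.2e} universe ages)")
```

Even at these absurdly generous rates, brute force over 128-bit programs already takes hundreds of universe ages, which is why "just maximize R" hides all the real work.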

> I present a rough but complete architectural view of how the brain works under the universal learning hypothesis.

Keep in mind that claiming a rough outline of how the brain operates is claiming more than the illustrious neuroscientists of today would claim.

[1] https://en.wikipedia.org/wiki/G%C3%B6del_machine [2] https://en.wikipedia.org/wiki/Symbolic_regression

[+] smallmouth|4 years ago|reply
The brain is not a machine. It's a gateway.
[+] tasty_freeze|4 years ago|reply
Great, I'd like to ask you some questions, as most talk I've heard along these lines is beyond vague. It'd be great if you could clarify some questions I have about the idea. My questions might be so off-base from your mental model of how things work that they may seem ridiculous, but that would stem from my never having heard more than vague hand-waves about "radio receiver" brains and such.

#1: What is the division of labor between the physical mind (PM) and the non-physical mind (NPM)? E.g., is the NPM doing all the thinking, with the PM just carrying out the instructions? Or does the PM do some share of the work, with the NPM just nudging it when need be, like making free-will decisions?

#2: What is the NPM doing while the PM is sleeping? There is some metabolic reason for the mind to sleep 1/3 of the time, but presumably the NPM has no such need. Is it still thinking all that time, or does it sleep too?

#3: When the PM is damaged in specific ways, perhaps catastrophically, what do you think the NPM is doing? Does it get frustrated that the PM can no longer receive the full message? For example, in the case of an Alzheimers patient.

#4: By what mechanism does the NPM communicate its thoughts/wishes to the PM? Does it incur a violation of the physical laws in the PM?

#5: Likewise to #4, how does the PM communicate to the NPM so the NPM knows what is going on?

Because written communication is ambiguous, I'll explicitly state these are sincere questions.

[+] mdp2021|4 years ago|reply
I really believe that in this communicational context - in these pages - statements of the form 'A is B' should always be accompanied by relevant, sufficient (for the context) justification. There is legitimately no 'A is B'; there is only 'One can state that A is B owing to C'.

Otherwise, anyone here could state 'Neurons function through lightbeams (full-stop)', 'Neutrinos are Leibniz's monads (full-stop)', 'Filippa's Republic is better than Western Democracy (full-stop)', 'Smith is wrong (full-stop)', 'The ratio of circumference and diameter is clever (full-stop)'...

If anybody stated 'A is B (full stop)', another could come up with "No it isn't". We would be at Monty Python's Argument Sketch - a parody of the "drily strictly professional" soulless¹ spoilt cheap service associated (in some cultures) with brothels.

[+] edgyquant|4 years ago|reply
In what way is the brain not a machine? Even if it is a gateway, whatever you mean by that, the two aren’t mutually exclusive.
[+] IIAOPSW|4 years ago|reply
I think the brain is a skateboard. Or perhaps a cupholder.