So technically we have the technology to simulate a human brain. Just not anywhere near real time. And not at any semblance of reasonable cost. And not guaranteed to simulate the important parts.
That's something I do too, with a twist: instead of going to someone with a problem, we try to figure it out ourselves first, and if we can't, we ask for help while explaining the solution we originally attempted.
It seems to reflect the general way we understand the brain, right? Firing together, wiring together? Then, abracadabra, meaningful blobs of brain buzzy stuff emerge from seemingly simple rules? It seems beautifully pure that mind maps are literally "mind maps" in a sense, a bit like how we have grid cells arranged proximally to mirror physical spaces as we walk through them.
A hidden state machine plus a neural net appears to be similar to how mice learn to navigate a maze.
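For illustration, that kind of "state machine" can be sketched as a few states with transition probabilities. Everything below (state names, probabilities) is made up for illustration and is not taken from the paper:

```python
import random

# Hypothetical T-maze as a tiny state machine; the states and
# transition probabilities here are invented for illustration.
TRANSITIONS = {
    "start":     [("stem", 1.0)],
    "stem":      [("junction", 1.0)],
    "junction":  [("left_arm", 0.5), ("right_arm", 0.5)],
    "left_arm":  [("reward", 1.0)],
    "right_arm": [("no_reward", 1.0)],
}

def run_trial(rng):
    """Walk the state machine from 'start' until a terminal state."""
    state, path = "start", ["start"]
    while state in TRANSITIONS:
        r, acc = rng.random(), 0.0
        for nxt, p in TRANSITIONS[state]:
            acc += p
            if r <= acc:
                state = nxt
                break
        path.append(state)
    return path

print(run_trial(random.Random(0)))
```

The interesting claim in the paper is that something isomorphic to such a transition structure emerges in the recorded neural activity, rather than being hard-coded.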
2020-12: Some research with human subjects on how the brain reacts when we're reading code.
My naive perspective is that the foundational properties of the brain are probably really similar between mice and humans. For example, we have put human brain cells into rats and the brain cells have done... something.
There are some important differences between mouse and human hippocampi, including different long-range connections. However, the overall patterns of organization across the hippocampal subfields (e.g. heavy recurrence in CA3, sparse separation of signals in the dentate gyrus) are very similar in structure and response patterns between the species. Gotta love the spiking data from human epilepsy patients.
Animats (2 years ago):
The mouse's brain is scanned very intrusively.[2] That's both impressive and scary. They're scanning the surface of part of the brain at a good scan rate and in high detail, and they're seeing the activation of individual neurons. This is much finer detail than non-intrusive functional MRI scans.
Does the data justify the conclusions? The maze being used is a simple T-shaped maze. The "state machine" supposedly learned is extremely simple. They conclude quite a bit about the learning mechanism from that. But now that they have this experimental setup working, there should be more results coming along.
[1] https://www.biorxiv.org/content/10.1101/2023.08.03.551900v2....
[2] https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-01...
60654 (2 years ago):
1. We can observe how the state machine gets generated: first just a jumble of locations in a hub-and-spoke topology (no correlations), then some pairwise correlations start appearing, making a kind of beads-on-a-string topology, and then finally the mental model snaps marvelously into two completely separate paths that meet at the ends. It's amazing to see these mental models get formed in vivo out of initially unstructured perceptions.
2. In addition to standard HMM modeling, the authors find that a "biologically plausible recurrent neural network (RNN) trained using Hebbian learning" can mimic some of this (though not exactly). More interestingly, they find that LSTMs and transformers cannot. That makes sense structurally, but it's a good reminder for anyone who buys the anthropomorphic hype that transformers have memory or anything of the sort (they don't :) ).
The scanning is indeed very intrusive, though.
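The Hebbian rule they quote is simple enough to sketch. This toy is my own construction, not the paper's network; it just shows the "fire together, wire together" update producing two decoupled groups of cells, loosely like the two separate paths:

```python
import numpy as np

n = 8                    # number of toy "neurons"
W = np.zeros((n, n))     # recurrent weights
lr = 0.1                 # learning rate

# Two activity patterns, e.g. "left path" cells vs "right path" cells.
left = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
right = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

for _ in range(50):
    for x in (left, right):
        W += lr * np.outer(x, x)   # Hebbian: co-active pairs strengthen
        np.fill_diagonal(W, 0.0)   # no self-connections

# Cells active in the same pattern end up strongly coupled; cells
# from different patterns never co-fire, so their weight stays zero.
print(W[0, 1], W[0, 4])
```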
ImHereToVote (2 years ago):
pizzafeelsright (2 years ago):
The rule in the house is we don't say "I don't know." If we don't know something, we are required to think about it and then ask a question.
Recently, he asked how an audio-recording dog trainer worked, specifically how it "went back up," because he couldn't see the internals. He knew that it went down and came back up, and he knew that it was not electronic but mechanical. I asked him to think about it. He thought, and I could see his mind working, going through everything he knew of where a toy of his would go down and up. He sat for around 20 seconds and asked, "Is it a spring?" I was quite impressed, considering he is 4 years old, that he was able to come to this conclusion.
There is a map we create: a list of things that go up and down. From that list, knowing it was not a pulley or a plunger (because it returned to its original state), he was able to narrow it down to the one object that would work.
The biggest jumps in my education have been directly related to people mapping concepts and ideas instead of memorization. It's like the idea that everything is a file. From that framework you can pull out questions like: can I read it, do I have permission to read it, can I write to it? So now when someone explains a new shiny object to me, like some fancy whiz-bang database, I ask a couple of questions and generally know how it works.
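That "everything is a file" framework really is that mechanical: the same few questions (does it exist, may I read it, may I write it) apply to any file-like resource. A quick sketch, using a temp file as a stand-in:

```python
import os
import tempfile

# Create a throwaway file so the checks below are self-contained.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

exists = os.path.exists(path)        # is it there at all?
readable = os.access(path, os.R_OK)  # may I read it?
writable = os.access(path, os.W_OK)  # may I write it?
print(exists, readable, writable)    # a file we just made: True True True

os.unlink(path)  # clean up
```

The same three questions port directly to sockets, pipes, and device nodes, which is the point of the framework.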
aio2 (2 years ago):
padolsey (2 years ago):
samstave (2 years ago):
They have the largest hippocampus-to-brain ratio and have 3D spatial memory of all their food-source locations, as well as (with ravens) who their human enemies are.
We can learn a lot about memory from these birds.
ilaksh (2 years ago):
Is there anything optimized for GPU or TPU?
mdp2021 (2 years ago):
MeriB (2 years ago):
hanniabu (2 years ago):
"This is a state machine", "this is a natural net", "it's running a coroutine", "it's garbage collection"...
convolvatron (2 years ago):
giardini (2 years ago):
gridspy (2 years ago):
If you hold them still and probe their brain while they navigate in VR, you see a state-machine map appear in their mind. That map varies as the VR map varies.
x86x87 (2 years ago):
bannedbybros (2 years ago):
throwaway290 (2 years ago):
xjay (2 years ago):
> The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks. [1]
[1] https://news.mit.edu/2020/brain-reading-computer-code-1215
insanitybit (2 years ago):
The chemistry is probably different in a bunch of ways "rats evolved to use this hormone to feel X, we use it to tell us Y" or some other such thing, but structurally I'd imagine that neurons function similarly.
Anyone know more?
SubiculumCode (2 years ago):
Nowado (2 years ago):