To draw too many parallels here would be like comparing stick figures to still life paintings and proclaiming "They're both flowers!" While that might be true, you won't learn much about still life paintings from stick figures.
At best, this research says something about the task of navigation and optimal representations for that task, rather than anything profound about neural networks beyond the fact that both can optimize for the same task, which should surprise no one.
> The “grid units” that spontaneously emerged in the network were remarkably similar to what’s seen in animals’ brains, right down to the hexagonal grid.
Could someone with more experience in ML explain what this means? In what sense do NN cells have positions or geometry? What are the NN heat maps below the quote showing?
The neurons themselves don’t form grids. A map of the points in the real world where a grid neuron fires forms a triangular/hexagonal grid.
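For anyone wondering what those heat maps show: roughly, you bin the arena into spatial cells and average a single unit's activation over every visit to each bin. A minimal sketch with made-up positions and a fake activation function (all names, shapes, and the activation itself are my own assumptions, not from the paper):

```python
import numpy as np

# Hypothetical data: an agent's 2D positions over time and one hidden
# unit's activation at each timestep.
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 1.0, size=(10000, 2))   # (x, y) in a unit box
activations = np.sin(20 * positions[:, 0]) + np.sin(20 * positions[:, 1])

# Bin the arena into a grid and average the unit's activation per bin.
bins = 20
x_idx = np.minimum((positions[:, 0] * bins).astype(int), bins - 1)
y_idx = np.minimum((positions[:, 1] * bins).astype(int), bins - 1)

rate_map = np.zeros((bins, bins))
counts = np.zeros((bins, bins))
np.add.at(rate_map, (y_idx, x_idx), activations)    # sum activation per bin
np.add.at(counts, (y_idx, x_idx), 1)                # visits per bin
rate_map /= np.maximum(counts, 1)                    # mean activation per bin
```

Plotting `rate_map` as an image is exactly the kind of heat map in the article; for a real grid unit the bright spots form that hexagonal lattice.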
These neurons seem to have discovered what board gamers found out much later - hexagonal grids are better for calculating movement.
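The board-gamer trick being referred to: on a hex grid every neighbor is equidistant, and movement distance has a closed form in cube coordinates (x + y + z = 0). A quick sketch (coordinate convention is the common axial/cube one, my choice, not anything from the article):

```python
# Hex distance in axial coordinates (q, r): convert to cube coordinates
# (x, y, z) with x + y + z = 0; distance is then max(|dx|, |dy|, |dz|).
def hex_distance(a, b):
    aq, ar = a
    bq, br = b
    ax, ay, az = aq, -aq - ar, ar
    bx, by, bz = bq, -bq - br, br
    return max(abs(ax - bx), abs(ay - by), abs(az - bz))
```

Compare that with a square grid, where you have to decide whether diagonal steps cost 1, 2, or sqrt(2).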
On a train and don't have great wifi -- but there was a super-cool poster at ICLR which demonstrated that training an RNN to perform dead-reckoning naturally produced grid-like cells. Is this an extension of that work? Or an independent discovery of the same phenomenon?
I'm trying to reproduce that original work, so far without success.
Are you referring to the Cueva paper? I also spent a considerable amount of time trying to replicate it without success. Do you know if the author has released any code yet?
I wonder what the coordinate plane is for the ML visualizations and how it relates to the one used for visualizations from a physical brain. Seems ripe for gaming.
This supports the idea that machine learning is increasingly coming full circle to support neuroscience. Previously, AI researchers looked to the brain for inspiration; now, more than ever, neuroscientists are being inspired by advances in deep learning, etc.
I don't doubt that there are many similarities between how machine learning works and how brains work, but this seems like a pretty myopic trend of confirmation bias. Neurons and brains are so much more complex than machine learning and it will be really unfortunate if we limit ourselves to the machine learning model in neuroscience.
This would be the dream, but biological neural networks don't do gradient descent, so they have little to do with ANNs until we discover how they really train.
Natural reinforcement learning [1] was known before the mathematical kind. Of course, people often mistake Markov chains for reality, but they can still be useful, even if in completely unexpected ways, as with DRL.
[1] Rescorla RA, Wagner AR. A theory of Pavlovian conditioning: variations in the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy WF, editors. Classical conditioning II. New York: Appleton-Century Crofts; 1972. pp. 64–99.
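The Rescorla-Wagner model in [1] is itself a tiny learning rule, and its error-driven form is the same shape that reappears in TD learning. A sketch for a single reinforced cue (parameter values are arbitrary illustrations, not from the paper):

```python
# Rescorla-Wagner: on each trial the associative strength V changes by
# dV = alpha * beta * (lambda - V_total), i.e. learning is driven by the
# prediction error between the obtained and expected reinforcement.
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Track V for one cue over `trials` reinforced pairings."""
    V, history = 0.0, []
    for _ in range(trials):
        V += alpha * beta * (lam - V)   # prediction error drives the update
        history.append(V)
    return history

curve = rescorla_wagner(10)   # V rises toward the asymptote lambda = 1.0
```

The resulting negatively accelerated learning curve is the classic conditioning result, which is why the delta-rule family in ML looked so familiar to psychologists.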
chiefalchemist | 7 years ago
The Internet...access to all the information in the world; most of which isn't new or worth knowing about. Bring a shovel. You're gonna need it.
https://en.m.wikipedia.org/wiki/Grid_cell
TaylorAlexander | 7 years ago
As a roboticist just beginning to read ML papers (to help in this very field!), I'd otherwise find this information out of reach.
jimfleming | 7 years ago
"Emergence of grid-like representations by training recurrent neural networks to perform spatial localization". https://arxiv.org/abs/1803.07770
It appears to be from Columbia rather than DeepMind, with different authors.