top | item 17031660

Artificial Neural Nets Grow Brainlike Navigation Cells

151 points | digital55 | 7 years ago | quantamagazine.org

47 comments

[+] jimfleming|7 years ago|reply
To draw too many parallels here would be like comparing stick figures to still life paintings and proclaiming "They're both flowers!" While it might be true, you won't learn much about still life paintings from stick figures.

At best this research says something about the task of navigation and optimal representations for that task, rather than anything profound about neural networks other than that they can optimize for some task—which should surprise no one.

[+] chiefalchemist|7 years ago|reply
Things get over-stated because there would be nothing worthy of publishing otherwise. It's the same (f'd up) model that drives the MSM, etc.

The Internet...access to all the information in the world; most of which isn't new or worth knowing about. Bring a shovel. You're gonna need it.

[+] zitterbewegung|7 years ago|reply
Yeah, same thing happened with fractal patterns and nebulae in deep space 20 years ago. (Also neurons.)
[+] wyattpeak|7 years ago|reply
> The “grid units” that spontaneously emerged in the network were remarkably similar to what’s seen in animals’ brains, right down to the hexagonal grid.

Could someone with more experience in ML explain what this means? In what sense do NN cells have positions or geometry? What are the NN heat maps below the quote showing?

[+] mr_toad|7 years ago|reply
The neurons themselves don’t form grids. A map of the points in the real world where a grid neuron fires forms a triangular/hexagonal grid.

These neurons seem to have discovered what board gamers found out much later - hexagonal grids are better for calculating movement.

https://en.m.wikipedia.org/wiki/Grid_cell
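The board-gaming point above can be made concrete. A minimal sketch of why hex grids suit movement, using the standard axial-coordinate distance formula (the function name and example coordinates are illustrative, not from the article):

```python
# On a square grid, a "diagonal" step covers sqrt(2) the distance of a
# straight step but usually costs the same, distorting movement ranges.
# On a hex grid every one of the 6 neighbor steps covers the same
# physical distance, so step count tracks real distance much better.

def hex_distance(a, b):
    """Steps between two hexes given in axial coordinates (q, r)."""
    dq = a[0] - b[0]
    dr = a[1] - b[1]
    # Standard axial-coordinate hex distance.
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

print(hex_distance((0, 0), (2, -1)))  # prints 2
```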

[+] taliesinb|7 years ago|reply
On a train and don't have great wifi -- but there was a super-cool poster at ICLR which demonstrated that training an RNN to perform dead-reckoning naturally produced grid-like cells. Is this an extension of that work? Or an independent discovery of the same phenomenon?

I'm trying to reproduce that original work, so far without success.
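For readers wondering what "training an RNN to perform dead-reckoning" means in practice, here is a minimal sketch of the path-integration task setup: the network's input is a sequence of velocity vectors and its supervised target is the integrated position. All names, shapes, and parameters here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def make_trajectory(steps, rng, speed=0.1):
    """Generate a smooth random-walk path: per-step velocities and the
    ground-truth 2-D positions obtained by integrating them."""
    # Heading drifts smoothly, like an animal turning as it forages.
    headings = np.cumsum(rng.normal(0.0, 0.3, size=steps))
    velocities = speed * np.stack([np.cos(headings), np.sin(headings)], axis=1)
    # Position is just the cumulative sum of velocities (dead reckoning).
    positions = np.cumsum(velocities, axis=0)
    return velocities, positions

rng = np.random.default_rng(0)
v, p = make_trajectory(100, rng)
# An RNN trained to map v -> p sees only self-motion; the reported result
# is that grid-like tuning emerges in its recurrent units.
```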

[+] jimfleming|7 years ago|reply
I think you're referring to this paper:

"Emergence of grid-like representations by training recurrent neural networks to perform spatial localization". https://arxiv.org/abs/1803.07770

It appears to be from Columbia vs DeepMind with different authors.

[+] gimagon|7 years ago|reply
Are you referring to the Cueva paper? I also spent a considerable amount of time trying to replicate it without success. Do you know if the author has released any code yet?
[+] jcims|7 years ago|reply
I wonder what the coordinate plane is for the ML visualizations and how it relates to same for visualizations from a physical brain. Seems ripe for gaming.
[+] zamalek|7 years ago|reply
As it is dead reckoning, I assume it is relational/velocity.
[+] eboyjr|7 years ago|reply
This supports the idea that, increasingly, machine learning is coming back full circle to support neuroscience. Previously, AI researchers looked at the brain for inspiration and now more than ever neuroscientists are being inspired by advances in deep learning, etc.
[+] yosito|7 years ago|reply
I don't doubt that there are many similarities between how machine learning works and how brains work, but this seems like a pretty myopic trend of confirmation bias. Neurons and brains are so much more complex than machine learning and it will be really unfortunate if we limit ourselves to the machine learning model in neuroscience.
[+] make3|7 years ago|reply
this would be the dream, but biological neural networks don't do gradient descent, so they have little to do with ANNs until we discover how they really train.
[+] bra-ket|7 years ago|reply
why use "deep reinforcement learning" at all, is there any basis to believe it's a valid model of biological learning?
[+] bitL|7 years ago|reply
Natural reinforcement learning [1] was known before the mathematical one. Of course, people often mistake Markov chains for reality, but they can still be useful, even in completely unexpected ways like with DRL.

[1] Rescorla RA, Wagner AR. A theory of Pavlovian conditioning: variations in the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy WF, editors. Classical conditioning II. New York: Appleton-Century Crofts; 1972. pp. 64–99.

[+] bluetwo|7 years ago|reply
Give me a valuable use for this and I'll give you an up-vote.