apl | 7 years ago
In October '17, Cueva & Wei put out an (anonymous) paper that recapitulates the core result almost exactly -- that training a recurrent neural network to perform dead reckoning/path integration gives you intermediate units whose spatial firing fields strongly resemble grid cells. Critically, this only happens when regularization is applied: Cueva/Wei used noisy inputs, while DeepMind implemented 50% stochastic dropout in the intermediate linear layer. There are some superficial differences (generic RNN units vs. LSTM), but at their core these studies are virtually identical. Check it out:
https://openreview.net/forum?id=B17JTOe0-
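For anyone who hasn't read either paper, here's a minimal numpy sketch of the setup both groups used. All sizes, noise scales, and weight initializations here are my own illustrative choices, and the network is untrained -- the actual studies trained such a network end-to-end on the position-readout loss and then looked at the spatial tuning of the hidden units:

```python
import numpy as np

rng = np.random.default_rng(0)

# Task: dead reckoning / path integration.
# Inputs are (noisy) 2-D velocities; the target is the integrated position.
T, H = 100, 128                                 # timesteps, hidden units (arbitrary)
vel = rng.normal(scale=0.1, size=(T, 2))        # true velocities
pos = np.cumsum(vel, axis=0)                    # target: integrated position
inputs = vel + rng.normal(scale=0.01, size=vel.shape)  # input noise (Cueva/Wei-style regularizer)

# Vanilla RNN forward pass with 50% dropout on the hidden layer
# (DeepMind-style regularizer); weights here are random, i.e. untrained.
W_in = rng.normal(scale=0.1, size=(2, H))
W_rec = rng.normal(scale=1 / np.sqrt(H), size=(H, H))
W_out = rng.normal(scale=0.1, size=(H, 2))

h = np.zeros(H)
preds = []
for t in range(T):
    h = np.tanh(inputs[t] @ W_in + h @ W_rec)
    mask = rng.random(H) < 0.5                  # 50% stochastic dropout
    preds.append((h * mask / 0.5) @ W_out)      # inverted-dropout scaling
preds = np.array(preds)

# The supervised objective both papers minimize: predicted vs. true position.
loss = np.mean((preds - pos) ** 2)
print(preds.shape, float(loss) >= 0.0)
```

Minimizing that loss (with the regularizer on) is, in both papers, what produces the grid-like hidden units.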
What I don't get -- why doesn't DeepMind acknowledge this result? Sure, the Nature paper was submitted in July '17, but these things go through many revisions. Clearly, DeepMind went a bit further by integrating the grid-like representations with visual CNNs in a navigation agent. Nonetheless: Fig. 1 is the core result, everything from Fig. 2 onwards is nice-to-have but not essential, and I feel like Cueva/Wei got there first.
Ah, well. At least the minor controversy brings in great publicity for the Cueva/Wei paper.