
apl | 7 years ago

It's certainly an interesting paper, but there's a bit of publication weirdness at play here.

In October '17, Cueva & Wei put out an (at the time anonymous) paper that recapitulates the core result almost exactly -- training a recurrent neural network to perform dead reckoning/path integration yields intermediate units whose spatial firing fields strongly resemble grid cells. Critically, this only happens when regularization is applied; Cueva/Wei used noisy inputs, and DeepMind implemented 50% stochastic dropout on the intermediate linear layer. There are some superficial differences (generic RNN units vs. LSTM), but at their core these studies are virtually identical. Check it out:

https://openreview.net/forum?id=B17JTOe0-
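For concreteness, here's a minimal NumPy sketch of that shared training setup -- velocity inputs, position targets, and a regularizer on the recurrent network. All sizes, noise levels, and names here are made up for illustration; neither paper's actual architecture or training loop is reproduced:

```python
import numpy as np

def make_trajectory(steps, box=2.2, rng=None):
    """Random walk in a square arena: inputs are 2-D velocities,
    targets are the integrated positions (the path-integration task)."""
    if rng is None:
        rng = np.random.default_rng(0)
    vel = rng.normal(scale=0.1, size=(steps, 2))
    pos = np.clip(np.cumsum(vel, axis=0), -box / 2, box / 2)
    return vel, pos

class PathIntegratorRNN:
    """Vanilla RNN that must report position from velocity alone.

    Regularization is the key ingredient: Cueva/Wei inject input noise,
    DeepMind use dropout; this sketch shows the input-noise variant.
    Weights are random (untrained) -- the point is the task structure."""

    def __init__(self, n_hidden=128, noise=0.05, rng=None):
        if rng is None:
            rng = np.random.default_rng(1)
        self.rng = rng
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, 2))
        self.W_rec = rng.normal(scale=1 / np.sqrt(n_hidden),
                                size=(n_hidden, n_hidden))
        self.W_out = rng.normal(scale=0.1, size=(2, n_hidden))
        self.noise = noise

    def forward(self, vel):
        h = np.zeros(self.W_rec.shape[0])
        states, preds = [], []
        for v in vel:
            # noise on the velocity input acts as the regularizer
            v_noisy = v + self.rng.normal(scale=self.noise, size=2)
            h = np.tanh(self.W_rec @ h + self.W_in @ v_noisy)
            states.append(h)      # hidden units: where grid-like fields emerge
            preds.append(self.W_out @ h)  # linear position readout
        return np.array(states), np.array(preds)
```

After training such a network to minimize position error, the analysis in both papers amounts to plotting each hidden unit's activity as a function of the animal's (x, y) position and checking for hexagonal periodicity.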

What I don't get -- why doesn't DeepMind acknowledge this result? Sure, the Nature paper was submitted in July '17, but these things go through many revisions. Granted, DeepMind went a bit further by plugging the grid-like representations into a vision-based agent. Nonetheless: Fig. 1 is the core result, everything from Fig. 2 onwards is nice to have but not essential, and I feel like Cueva/Wei got there first.

Ah, well. At least the minor controversy brings in great publicity for the Cueva/Wei paper.
