top | item 10785191

nrmn | 10 years ago

I've been trying to read a paper a day since midsummer. These are a few that I personally found interesting since then:

Generating Sequences With Recurrent Neural Networks - http://arxiv.org/abs/1308.0850 An older one, but important to understand deeply, since other recent ideas have come from it!

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks - http://arxiv.org/abs/1511.06434

Unitary Evolution Recurrent Neural Networks - http://arxiv.org/abs/1511.06464

State of the Art Control of Atari Games Using Shallow Reinforcement Learning - http://arxiv.org/abs/1512.01563 Interesting discussion in section 6.1 on the shortcomings/issues of DeepMind's DQN.

Spectral Representations for Convolutional Neural Networks - http://arxiv.org/abs/1506.03767

Deep Residual Learning for Image Recognition - http://arxiv.org/abs/1512.03385

Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) - http://arxiv.org/abs/1511.07289 I wish they had done more comparisons between identical network architectures with only the activation units swapped out, e.g. AlexNet with ReLU vs. AlexNet with ELU.
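For context on what such a swap would change, here is a minimal NumPy sketch of the two activations (the ELU formula is from the paper above; alpha=1.0 is the default the authors use, and the example input values are made up for illustration):

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x) -- zero gradient and zero output for negative inputs
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # ELU (Clevert et al.): identity for x > 0,
    # alpha * (exp(x) - 1) otherwise, saturating smoothly toward -alpha
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(relu(x))  # negatives clipped to 0
print(elu(x))   # negatives mapped into (-alpha, 0)
```

Since both functions are elementwise, swapping them in a network like AlexNet really does leave every other part of the architecture untouched, which is what makes that comparison clean.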

On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models - http://arxiv.org/abs/1511.09249

Just a few from my list :)

sgt101 | 10 years ago

Crumbs, it takes me about two weeks to get through a paper properly!

wodenokoto | 10 years ago

I can't speak for the parent, but I believe people who read a paper a day don't try to understand it deeply enough to start implementing whatever the paper describes. Rather, they read it to get an idea of the approach, the kind of results it gives, and the kinds of problems it can solve.