The paper presents improved techniques for training Generative Adversarial Networks (GANs). Code is published here: https://github.com/openai/improved-gan (it uses TensorFlow, Theano, and Lasagne).
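One of the headline techniques in the paper is its semi-supervised setup: the discriminator becomes a (K+1)-class classifier, with real labeled data assigned to one of K classes and generated samples to an extra (K+1)-th class, and D(x) = Z(x)/(Z(x)+1) where Z(x) is the sum of the exponentiated class logits (equivalently, the fake-class logit is fixed at 0). A minimal numpy sketch of the resulting losses, under those definitions (function names and the 1e-12 clamp are my own, not from the paper's code):

```python
import numpy as np

def log_sum_exp(logits):
    """Numerically stable log(sum_k exp(logits[..., k]))."""
    m = logits.max(axis=-1, keepdims=True)
    return m[..., 0] + np.log(np.exp(logits - m).sum(axis=-1))

def d_real_prob(logits):
    """D(x) = Z(x) / (Z(x) + 1), with Z(x) = sum_k exp(l_k(x)).

    This equals sigmoid(log Z(x)): the softmax probability that x is
    NOT the (K+1)-th "generated" class, whose logit is fixed at 0.
    """
    lse = log_sum_exp(logits)
    return 1.0 / (1.0 + np.exp(-lse))  # stable for large positive logits

def unsupervised_loss(logits_real, logits_fake):
    """-E[log D(x)] - E[log(1 - D(G(z)))] on unlabeled and generated data."""
    p_real = d_real_prob(logits_real)
    p_fake = d_real_prob(logits_fake)
    return (-np.mean(np.log(p_real + 1e-12))
            - np.mean(np.log(1.0 - p_fake + 1e-12)))

def supervised_loss(logits, labels):
    """Standard K-class cross-entropy on the labeled subset."""
    idx = np.arange(len(labels))
    return np.mean(log_sum_exp(logits) - logits[idx, labels])
```

As a sanity check: confidently "real" logits push D(x) toward 1 and make the unsupervised loss small, while uniform logits over K = 10 classes give a supervised loss of log 10.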
It would be nice if there were an explicit CC license, but arXiv has a perpetual license to distribute it, and you can link to arXiv. Why would you host the paper yourself?
nl|9 years ago
One of the great hopes of the current deep learning boom is that we will somehow develop unsupervised, or at least semi-supervised, techniques that come close to the results being achieved with supervised learning.
Adversarial networks are one of the more likely routes to semi-supervised learning. There is also a lot of interesting work combining Bayesian optimization techniques with deep networks to develop one-shot learning[1][2]. Some of this was (very broadly) in response to the one-shot learning paper coming out of (I've forgotten!!), whose authors are famously doubtful about the utility of deep learning, and who showed somewhat competitive results on MNIST. (I can't remember who it was - there have been HN discussions about the group. Sorry!!)
Both OpenAI and DeepMind are following roughly similar paths here (no surprise really), and the results are looking really good.
[1] http://arxiv.org/abs/1603.05106
[2] http://arxiv.org/abs/1606.04080
T-A|9 years ago
This? https://www.technologyreview.com/s/544376/this-ai-algorithm-...
http://cims.nyu.edu/~brenden/LakeEtAl2015Science.pdf
mrdrozdov|9 years ago
Nice work team!
imh|9 years ago
nl|9 years ago
franciscop|9 years ago
Basically, can I redistribute this paper on my website? If so, under what license?
PS, great job
modeless|9 years ago
Edit: The code has been published as well, under the MIT license. https://github.com/openai/improved-gan