item 11902716

Improved Techniques for Training GANs – OpenAI's first paper

126 points | gwulf | 9 years ago | arxiv.org

12 comments


nl | 9 years ago

So this is pretty interesting.

One of the great hopes of the current deep learning boom is that somehow we will develop unsupervised or at least semi-supervised techniques which can perform close to the great results that are being seen with supervised learning.

Adversarial networks are one of the more likely routes to semi-supervised learning. There is also a lot of interesting work in combining Bayesian optimization techniques with deep networks to develop one-shot learning[1][2]. Some of this was (very broadly) in response to the one-shot learning paper coming out of (I've forgotten!!) where the authors are famously doubtful about the utility of deep learning, and showed somewhat competitive results on MNIST. (I can't remember who it was - there have been HN discussions about the group. Sorry!!)

Both OpenAI and DeepMind are following roughly similar paths here (no surprise really), and the results are looking really good.

[1] http://arxiv.org/abs/1603.05106

[2] http://arxiv.org/abs/1606.04080
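For anyone curious about the semi-supervised part: the paper's trick is to give the discriminator K+1 output classes - the K real classes plus an extra "generated" class - so labeled, unlabeled, and generated samples all contribute to one loss. A minimal numpy sketch of that combined discriminator loss (all names here are mine for illustration, not from the released code):

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over the last axis."""
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def semi_supervised_d_loss(logits_lab, labels, logits_unl, logits_gen):
    """Discriminator loss over K+1 classes (last column = "generated").

    logits_lab: (n, K+1) logits for labeled real images
    labels:     (n,) integer labels in 0..K-1
    logits_unl: (m, K+1) logits for unlabeled real images
    logits_gen: (m, K+1) logits for generator samples
    """
    K = logits_lab.shape[1] - 1  # index of the "generated" class

    # Supervised term: ordinary cross-entropy on the K real classes.
    lsm = log_softmax(logits_lab)
    loss_sup = -lsm[np.arange(len(labels)), labels].mean()

    # Unsupervised term: unlabeled data should be classified as "real"
    # (any of the first K classes), generated data as class K+1.
    p_gen_unl = np.exp(log_softmax(logits_unl))[:, K]
    p_gen_gen = np.exp(log_softmax(logits_gen))[:, K]
    eps = 1e-8  # avoid log(0)
    loss_unsup = (-np.log(1.0 - p_gen_unl + eps).mean()
                  - np.log(p_gen_gen + eps).mean())

    return loss_sup + loss_unsup
```

The unsupervised term is just the standard GAN discriminator loss in disguise, with "real" meaning "not class K+1", which is why the unlabeled data helps the classifier at all.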

imh | 9 years ago

When they find the visual Turing test results important enough to put in the abstract, it's a shame they only include tiny images in the paper :(

franciscop | 9 years ago

What is the license for the paper? I can see it's licensed for Arxiv to distribute, but I cannot see any open access/distribution besides that.

Basically, can I redistribute this paper on my website? If so, under what license?

PS, great job

modeless | 9 years ago

It would be nice if there were an explicit CC license, but arXiv has a perpetual license to distribute it, and you can link to arXiv. Why would you host the paper yourself?

Edit: The code has been published as well, under the MIT license. https://github.com/openai/improved-gan