What a lovely tutorial. It's also worth noting that autoencoders are useful in supervised learning as feature generators for techniques that can be more effective (and problem-specific) than NNs, such as GBMs.
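A minimal sketch of that idea: train an autoencoder on the raw inputs, then use the code layer as features for a downstream model. Everything here (a tied-weight linear autoencoder, the sizes, learning rate, and synthetic data) is illustrative, not from the tutorial; in practice the `features` array would feed a GBM such as sklearn's `GradientBoostingClassifier`.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))
X = X - X.mean(axis=0)                      # center the data

k = 3                                       # width of the code / feature layer
W = rng.normal(scale=0.1, size=(10, k))     # tied weights: encode with W, decode with W.T
lr = 0.05

def recon_loss(W):
    E = X @ W @ W.T - X                     # reconstruction error
    return float((E ** 2).mean())

loss_before = recon_loss(W)
for _ in range(500):
    E = X @ W @ W.T - X
    # gradient of the summed squared error w.r.t. the tied weight matrix
    grad = 2.0 * (X.T @ E + E.T @ X) @ W / len(X)
    W -= lr * grad
loss_after = recon_loss(W)

features = X @ W   # learned representation; would be the GBM's input features
```

The reconstruction loss should drop substantially, and `features` has one column per code unit, ready to be passed to a gradient-boosted model.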
Could you please elaborate on this? I would really like to know whether autoencoders are still useful for classification if I have labels for only a small part of my training data. Is unsupervised pretraining still useful, or has it been completely replaced by other techniques, as the article somehow seems to suggest?
I don't understand the image denoising. He adds noise to the images but never uses them in the code. Is this an error by the author, or did I miss something?
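For reference, the usual denoising setup is exactly what the comment expects: the noisy images serve as the *inputs* and the clean images as the *targets*. A minimal numpy sketch of the data preparation; the `noise_factor` value and the commented `fit` call are illustrative, not the article's exact code.

```python
import numpy as np

rng = np.random.default_rng(0)
x_clean = rng.random((100, 784))   # stand-in for flattened images scaled to [0, 1]

# corrupt the inputs with Gaussian noise, then clip back into valid pixel range
noise_factor = 0.5
x_noisy = x_clean + noise_factor * rng.normal(size=x_clean.shape)
x_noisy = np.clip(x_noisy, 0.0, 1.0)

# training then pairs noisy inputs with clean targets, e.g. in Keras:
# autoencoder.fit(x_noisy, x_clean, epochs=..., batch_size=...)
```

If the noisy array never appears in the `fit` call, the model is just an ordinary autoencoder and the noise is indeed unused.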
It's both simple to use and very easy to customize to build ad-hoc architectures with custom nodes. Development is very active and it's well documented.
It can also use either tensorflow or theano as a runtime.
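For context, Keras at the time selected its backend either from a config file or an environment variable; a sketch, per the old Keras documentation (worth double-checking against the version in use):

```shell
# choose the backend before importing keras
export KERAS_BACKEND=theano

# or edit ~/.keras/keras.json, which contains a line like:
#   "backend": "tensorflow"
```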
hooloovoo_zoo | 10 years ago
nomailing | 10 years ago
isseu | 10 years ago
edit: Author fixed it
glial | 10 years ago
ogrisel | 10 years ago
callesgg | 9 years ago
msandford | 9 years ago
callesgg | 9 years ago