item 8055183

prajit | 11 years ago

A question about the actual slides: why don't they use unsupervised pretraining (e.g. a sparse autoencoder) for predicting MNIST? Is it just to show that they don't need pretraining to achieve good results, or is there something deeper?
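For context on what "pretrain then fine-tune" means here, a minimal sketch: a one-hidden-layer sparse autoencoder is trained on unlabeled data, and its encoder weights then initialize the first layer of a supervised classifier. This is not from the slides; toy random data stands in for MNIST, and the layer sizes, sparsity target, and learning rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 64))          # unlabeled "images", 64 features each

n_hidden, rho, beta, lr = 16, 0.05, 3.0, 0.1   # illustrative hyperparameters
W1 = rng.normal(0, 0.1, (64, n_hidden))        # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 64))        # decoder weights
b2 = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):               # unsupervised pretraining phase
    H = sigmoid(X @ W1 + b1)       # hidden activations
    Xhat = sigmoid(H @ W2 + b2)    # reconstruction of the input
    rho_hat = H.mean(axis=0)       # average activation per hidden unit

    # Gradients of reconstruction error plus a KL sparsity penalty that
    # pushes average activations toward the target rho
    d_out = (Xhat - X) * Xhat * (1 - Xhat)
    sparsity = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    d_hid = (d_out @ W2.T + sparsity) * H * (1 - H)

    W2 -= lr * H.T @ d_out / len(X);  b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X);  b1 -= lr * d_hid.mean(axis=0)

# The pretrained encoder (W1, b1) would now initialize the first layer of
# a supervised network; training without pretraining just means starting
# from the random init instead.
features = sigmoid(X @ W1 + b1)
print(features.shape)              # (256, 16)
```

The question in the comment is exactly whether this unsupervised phase is worth the trouble, or whether supervised training from random initialization already suffices on MNIST.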


colincsl | 11 years ago

I've only been watching from the Deep Learning sidelines -- but I believe people have steered away from pretraining over the past year or two. I think on practical datasets it doesn't seem to help.