gamegoblin | 12 years ago:
Very good write-up. If you want to trade memory (and a little accuracy) for speed, you can precompute a large lookup table for your sigmoid function, which should just about double its speed.
As an aside, and not to be too critical, because the post was great: as (presumably) a non-native English speaker, you might run a spell-checker on your post. There are also some missing pronouns, which make some sentences very Spanishy.
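A minimal sketch of such a lookup table in plain Python (the range and table size here are arbitrary illustrative choices, not from the comment):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Precompute sigmoid over a clamped range; outside [-8, 8] the
# exact sigmoid is already within ~3e-4 of 0 or 1.
LO, HI, N = -8.0, 8.0, 4096
STEP = (HI - LO) / N
TABLE = [sigmoid(LO + i * STEP) for i in range(N + 1)]

def sigmoid_lut(x):
    """Table-based sigmoid: one subtract, one divide, and one index
    instead of an exp() call."""
    if x <= LO:
        return 0.0
    if x >= HI:
        return 1.0
    return TABLE[int((x - LO) / STEP)]
```

With 4096 entries the absolute error stays below about 2e-3; a larger table, or linear interpolation between entries, trades more memory for more accuracy.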
A relatively small lookup table for the sigmoid function can also work well. Here are the various sigmoid approximations that Theano (a library used for deep learning research among other things) offers: http://deeplearning.net/software/theano/library/tensor/nnet/...
dfrodriguez143 | 12 years ago:
Definitely a native Spanish speaker here :P. Because I wrote this in an IPython notebook, it takes a little longer to spell-check. I will try not to be so lazy next time.
benhamner | 12 years ago:
Both datasets you used (iris and digits) are way too simple for neural networks to shine.
Neural networks / deep neural networks work best in domains where the underlying data has a very rich, complex, and hierarchical structure (such as computer vision and speech recognition). Currently, training these models is both computationally expensive and fickle. Most state-of-the-art research in this area is performed on GPUs, and there are many tunable parameters.
For most typical applied machine learning problems, especially on simpler datasets that fit in RAM, variants of ensembled decision trees (such as Random Forests) tend to perform at least as well as neural networks, with less parameter tuning and far shorter training times.
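For instance, a quick sanity check of that claim on the iris dataset, assuming scikit-learn is installed (settings here are illustrative, not from the comment):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# A Random Forest with default settings and no tuning at all.
clf = RandomForestClassifier(n_estimators=100, random_state=0)

# 5-fold cross-validated accuracy, one score per fold.
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

An untuned forest scores in the mid-90s percent on iris, which is the point: strong results with no hyperparameter search and seconds of training time.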
sine_dicendo | 12 years ago:
Not for nothing, but Ben, did you read the article? He's not even discussing most of what you mention. He is simply taking what he has learned and applying it. You seem to be going off on a tangent about advanced applications, where he is obviously just learning how these things work, not trying to teach a method or suggesting that he has discovered anything significant.
To the author: I liked the article. A simple, concise read.
jph00 | 12 years ago:
Handwritten digits are actually a pretty good domain for deep nets, and the poor performance achieved in the article's case is due to the implementation (it needs a deeper net, convolutional layers, etc.). Much better (99%+) results have been achieved by deep nets for digit recognition. In fact, Hinton (in his Coursera course) recommends this domain for studying deep nets, since it is so well understood.
(Ben, I know you're aware of all this already, but I just wanted to clarify for those who aren't as on top of the research as you.)
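For readers unfamiliar with the convolutional layers mentioned above, the core operation is just a small filter slid across the image; a bare-bones sketch (no deep-learning library assumed):

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution (strictly, cross-correlation, as most deep
    nets compute it): slide the kernel over the image and take a weighted
    sum of the pixels under it at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A 2x2 averaging kernel applied to a 3x3 image yields a 2x2 feature map.
feature_map = conv2d([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]],
                     [[0.25, 0.25],
                      [0.25, 0.25]])
```

A convolutional layer learns the kernel weights; because the same small kernel is reused at every position, it needs far fewer parameters than a fully connected layer over the raw pixels.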
mbq | 12 years ago:
You are just doing a single validation on a test set rather than cross-validation; the point of CV is to run many iterations of validation on different train-test splits and average the results.
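The train-test rotation described here can be sketched in a few lines of plain Python (a toy illustration; scikit-learn's cross_val_score does this for you):

```python
import random

def kfold_splits(n, k, seed=0):
    """Yield (train_indices, test_indices) pairs for k-fold
    cross-validation: each sample lands in a test set exactly once."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

# The CV score is the mean of the per-fold scores, e.g.:
# scores = [evaluate(train, test) for train, test in kfold_splits(len(data), 5)]
# cv_score = sum(scores) / len(scores)
```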
dfrodriguez143 | 12 years ago:
I agree completely; a more complete benchmark should be done with full cross-validation.
Just for future reference, I did run the fitting a few times, finding very similar (±2%) results. Also, Random Forests already do an average, so there is probably not much to improve for that particular algorithm.
rullgrus | 12 years ago:
Same here with Firefox 22.0. At 2560x1440 resolution you get three columns for the code blocks. It looks fine in IE 10; IE renders the page with only one column, independent of window width.
scotty79 | 12 years ago:
What learning do scientists think the brain actually uses? Back-propagation and the like seem like a method a god would use to architect a static brain for a given task.
dfrodriguez143 | 12 years ago:
Thanks for the tips.
theschreon | 12 years ago:
- Resilient Propagation (RPROP), which significantly speeds up training for full-batch learning: http://davinci.fmph.uniba.sk/~uhliarik4/recognition/resource...
- RMSProp, introduced by Geoffrey Hinton, which also speeds up training but can additionally be used for mini-batch learning: https://class.coursera.org/neuralnets-2012-001/lecture/67 (sign up to view the video)
Please consider more datasets when benchmarking methods:
- MNIST (70k 28x28-pixel images of handwritten digits): http://yann.lecun.com/exdb/mnist/ . There are several wrappers for Python on GitHub.
- UCI Machine Learning Repository: http://archive.ics.uci.edu/ml/datasets.html
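For reference, the RMSProp update itself is tiny; a minimal sketch (the hyperparameter values are common defaults, not taken from the lecture):

```python
def rmsprop_step(w, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    """One RMSProp update: divide each gradient by a running RMS of its
    recent magnitudes, so every weight takes similarly sized steps."""
    cache = [decay * c + (1 - decay) * g * g for c, g in zip(cache, grad)]
    w = [wi - lr * g / (c ** 0.5 + eps) for wi, g, c in zip(w, grad, cache)]
    return w, cache

# Minimizing f(w) = w^2 (gradient 2w) with RMSProp:
w, cache = [1.0], [0.0]
for _ in range(200):
    w, cache = rmsprop_step(w, [2.0 * w[0]], cache)
```

The per-weight cache is what makes it suitable for mini-batches: noisy per-batch gradients get normalized by their own recent magnitude rather than by a single global learning rate.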
dfrodriguez143 | 12 years ago:
Thanks for the suggestions.
lelandbatey | 12 years ago:
http://puu.sh/3vTL8.png
dfrodriguez143 | 12 years ago:
Which browser are you using?
wfn | 12 years ago:
Backprop falls within the class of 'supervised learning', which can indeed be said not to be very biologically realistic. However, reinforcement learning is observed, so the overall picture is probably much more complex: e.g. associative/recurrent/etc. networks with Hebb-like unsupervised learning [1] developing neuronal group testing and selection systems that involve reinforcement learning (see the first lecture/talk in [3]).
Perhaps worth a watch is a very nice talk by Geoffrey Hinton [2], which is often referred to on HN. (Hinton does refer to the notion of biological plausibility in this talk as far as I recall, but the focus is elsewhere: developing next-generation, state-of-the-art (mostly unsupervised) machine learning techniques/systems.)
[1]: https://en.wikipedia.org/wiki/Hebbian_theory
[2]: https://www.youtube.com/watch?v=AyzOUbkUf3M
[3]: http://kostas.mkj.lt/almaden2006/agenda.shtml (The original summary HTML file is gone from the original source, so this is a mirror; the links to videos and slides do work, though.) The first and second talks are somewhat relevant (particularly the first one, re: biological plausibility: "Nobelist Gerald Edelman, The Neurosciences Institute: From Brain Dynamics to Consciousness: A Prelude to the Future of Brain-Based Devices"), but all are great. Rather heavy, though. (Also, skip the intros.)
Edit: that first talk/lecture from Almaden (Edelman's) is actually a very nice exposition of the whole paradigm in which {cognitive, computational, etc.} neuroscience rests; it does get hairy later on. Overall, it's a great talk for the truly curious.
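For the curious, the Hebb-like learning mentioned above ("cells that fire together, wire together") is strikingly simple compared to backprop; a toy sketch (the learning rate and vector sizes are arbitrary):

```python
def hebbian_step(w, x, eta=0.1):
    """Hebb's rule: strengthen a weight when its input and the neuron's
    output are active at the same time. No error signal, no labels."""
    y = sum(wi * xi for wi, xi in zip(w, x))      # postsynaptic activity
    return [wi + eta * y * xi for wi, xi in zip(w, x)]

# Repeatedly presenting the same pattern strengthens the weights for it.
w = [0.0, 0.1, 0.0]
for _ in range(5):
    w = hebbian_step(w, [1.0, 1.0, 0.0])
```

In practice pure Hebbian growth is unstable (weights grow without bound), so models usually add a normalizing term, as in Oja's rule.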
dfrodriguez143 | 12 years ago:
Gonna add a direct link from the site soon.