item 9025709

A Brief Overview of Deep Learning

86 points | tim_sw | 11 years ago | yyue.blogspot.com

16 comments

a-priori | 11 years ago
This is not scientific fact since it is conceivable that real neurons are much more powerful than artificial neurons, but real neurons may also turn out to be much less powerful than artificial neurons. In any event, the above is certainly a plausible hypothesis.

Biological neurons are indeed more powerful than (most) artificial neural network models because these models discard important characteristics of biological neurons:

* Spiking: Most artificial models are 'rate-based', where they gloss over the spiking behaviour of neurons by only modelling the firing rate. This discards all the various kinds of spiking behaviours (intrinsically spiking, resonators, bursting, etc.) as well as the relative timing of spikes. The relative timing is the basis for spike-timing dependent plasticity (STDP), which enables Hebbian learning and long-term potentiation -- two of the ways that networks learn to wire themselves together and process information.

* Conduction delays: Biological neural networks have a delay between when a spike is generated at the axon hillock and when it arrives at the postsynaptic neuron's dendritic arbour. This delay acts like the delay-line memory in early computers, where information can be 'stored' in transit for short periods of time (in the ballpark of 0.5-40ms). And because different axons have different delays, information can be integrated over time by having one axon with a short delay and one with a long delay both end up at the same postsynaptic neuron.
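To make the STDP point above concrete, here is a minimal pair-based STDP update rule sketched in Python. The function name and all constants are illustrative assumptions, not values from the article or thread; the only idea taken from the comment is that the sign of the weight change depends on the relative timing of pre- and postsynaptic spikes:

```python
import math

# Minimal pair-based STDP sketch; all constants are illustrative assumptions.
# If the presynaptic spike arrives just before the postsynaptic neuron fires,
# the synapse is strengthened (potentiation); if just after, it is weakened
# (depression), with an exponential falloff in the timing difference.

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a spike pair separated by dt_ms = t_post - t_pre."""
    if dt_ms > 0:   # pre fired before post -> potentiation
        return a_plus * math.exp(-dt_ms / tau_ms)
    else:           # pre fired after post -> depression
        return -a_minus * math.exp(dt_ms / tau_ms)

print(stdp_dw(5.0))    # pre leads post -> positive update (strengthen)
print(stdp_dw(-5.0))   # pre lags post  -> negative update (weaken)
```

This is exactly the information a rate-based model throws away: two neurons with identical firing rates but different spike timings would produce opposite-signed updates here.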

mabbo | 11 years ago
But on a computational power level, does that actually make them more powerful?

What I mean is that finite state machines are less powerful computationally than context-free grammars. An FSM cannot compute certain things that a CFG can. Further, a CFG can't compute certain things that a Turing machine can. But we do know that neural networks like the ones being used for deep learning can compute anything a Turing machine can, and vice versa. They're equivalent.

So the real question is this: do those features (spiking, conduction delays) actually make biological neural networks capable of computing something that Turing Machines and Artificial Neural Networks cannot?

I hypothesize the answer is "no". A Turing machine could simulate any of those features you've mentioned, and therefore an ANN could also simulate them. (But I would love to be wrong about it, that would be amazing if human minds could do something that no machine would ever be capable of!)
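The "a Turing machine could simulate those features" claim is easy to make concrete: both spiking and conduction delays fall out of a few lines of ordinary sequential code. A toy sketch, with all class names, thresholds, and delays being illustrative assumptions rather than anything from the thread:

```python
from collections import deque

# Toy discrete-time simulation of spiking + conduction delays; all
# parameters are illustrative assumptions, not biologically fitted.

class DelayedSynapse:
    """An axon as a fixed-length delay line: input emerges delay_steps later."""
    def __init__(self, delay_steps, weight):
        self.buffer = deque([0.0] * delay_steps)
        self.weight = weight

    def transmit(self, spike):
        out = self.buffer.popleft()                   # current arriving now
        self.buffer.append(self.weight if spike else 0.0)
        return out

class LIFNeuron:
    """Leaky integrate-and-fire: accumulate input, spike at threshold, reset."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v, self.threshold, self.leak = 0.0, threshold, leak

    def step(self, current):
        self.v = self.leak * self.v + current
        if self.v >= self.threshold:
            self.v = 0.0
            return True
        return False

# Two axons with different delays converge on one neuron. A spike sent at
# t=0 on the slow axon and one sent at t=3 on the fast axon arrive together
# at t=4; neither alone crosses threshold, but their coincidence does.
fast = DelayedSynapse(delay_steps=1, weight=0.6)
slow = DelayedSynapse(delay_steps=4, weight=0.6)
post = LIFNeuron()

fired = []
for t in range(10):
    current = fast.transmit(t == 3) + slow.transmit(t == 0)
    fired.append(post.step(current))
```

The postsynaptic neuron fires only at the step where the two delayed spikes coincide, which is the "integration over time" described above, reproduced by a plainly Turing-computable loop.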

aswanson | 11 years ago
Thanks, nice summary of something I've intuitively pondered but never put into words. I've had arguments with other engineers about this topic but could never garner more than "but the models don't match the biology!"
sp332 | 11 years ago
Not to mention the wash of hormones and neurotransmitter chemicals that can dynamically amp up or damp down specific sections of the nervous system.
m0g | 11 years ago
Biological analogies are too often misleading and confusing when talking about deep learning[1]. We currently have very little knowledge of the way the brain works and most analogies are only wild assumptions. The ones contained in this article are blunt and based on strictly nothing but the author's feelings. Please read with care.

[1]: http://spectrum.ieee.org/robotics/artificial-intelligence/ma...

postitnotecode | 11 years ago
The author mentioned: "a DNN with 2 hidden layer and a modest number of units can sort N N-bit numbers". Does anyone have a reference for this result?
pagnotta | 11 years ago
I remember reading an article saying that memory storage in the brain may happen at the molecular level, which would make the brain's storage capacity truly huge. In any case, there is much more going on in the brain than just deep neural networks.