mcilai | 6 years ago
The important question to ask is: is the deep learning abstraction any good?
There's a very strong case to be made that the answer is yes: deep learning systems can perform many (though of course not all, at least not yet) tasks that involve perception (computer vision/speech recognition), motor control (the recent OpenAI robot), language understanding (machine translation/BERT/GPT), planning (AlphaGo/Dota/the DeepMind protein folding work), and even some symbolic reasoning (the recent work from Facebook on symbolic integration https://ai.facebook.com/blog/using-neural-networks-to-solve-...). Some of these tasks are performed at such a high level that they become commercially useful, and in some cases surpass "human level".
So here we have a "model family" -- deep learning -- with a set of principles so simple that it can be studied with intense mathematical rigor (for example, https://arxiv.org/pdf/1904.11955.pdf or https://papers.nips.cc/paper/9030-which-algorithmic-choices-...), and that produces many of the behaviors we want out of brains (and not just behavioral: see, e.g., https://arxiv.org/abs/1805.10734: " Interestingly, recent work has shown that deep convolutional neural networks (CNNs) trained on large-scale image recognition tasks can serve as strikingly good models for predicting the responses of neurons in visual cortex to visual stimuli, suggesting that analogies between artificial and biological neural networks may be more than superficial." -- this is just one of many papers that show that even under the hood, trained deep learning systems exhibit many properties of biological neural networks).
These reasons strongly suggest (imho) that deep learning is in fact the Newtonian theory of neuroscience. More strongly, no other theory comes remotely close in its simplicity and explanatory power.
friendlybus | 6 years ago
Self-driving cars can't leave an enclosed environment and might never do so safely.
Richard Dawkins spoke very highly of the brain's ability to do some kind of natural calculus to track a ball in flight, but most animals run on simple tricks and reference points.
Deep learning might be the "good thing" for the next ten years, but some of us are not going to let go of the transcendent truth that the brain is not defined by what we think it is. I see little reason to consider deep learning more likely than some emergent behaviour arising from a vast number of simple rules, like animals flocking together in a boids sim.
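The flocking analogy is easy to make concrete: in Craig Reynolds' boids model, each agent follows three purely local rules (separation, alignment, cohesion), and flock-like motion emerges with no global controller. A minimal sketch in plain Python; the rule weights and radius are arbitrary illustrative values, not from any published parameterisation:

```python
import random

def boids_step(pos, vel, radius=2.0, w_sep=0.05, w_ali=0.05, w_coh=0.005):
    """One update of the three classic boid rules (weights are arbitrary)."""
    new_vel = []
    for i, (px, py) in enumerate(pos):
        nbrs = [j for j, (qx, qy) in enumerate(pos)
                if j != i and (qx - px) ** 2 + (qy - py) ** 2 < radius ** 2]
        vx, vy = vel[i]
        if nbrs:
            n = len(nbrs)
            # separation: steer away from nearby flockmates
            sx = sum(px - pos[j][0] for j in nbrs)
            sy = sum(py - pos[j][1] for j in nbrs)
            # alignment: steer toward the neighbours' average heading
            ax = sum(vel[j][0] for j in nbrs) / n - vx
            ay = sum(vel[j][1] for j in nbrs) / n - vy
            # cohesion: steer toward the neighbours' centre of mass
            cx = sum(pos[j][0] for j in nbrs) / n - px
            cy = sum(pos[j][1] for j in nbrs) / n - py
            vx += w_sep * sx + w_ali * ax + w_coh * cx
            vy += w_sep * sy + w_ali * ay + w_coh * cy
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0], p[1] + v[1]) for p, v in zip(pos, new_vel)]
    return new_pos, new_vel

random.seed(0)
pos = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(20)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(50):
    pos, vel = boids_step(pos, vel)
```

Nothing in the update rule mentions "flock", yet coordinated group motion falls out of the local interactions, which is the sense of emergence being appealed to above.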
Strilanc | 6 years ago
Is this the same mistake as in "The Relativity of Wrong" [1]?
> people have thought they understood the Universe at last, and in every century they were proven to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong.
> [...]
> My answer to him was, "John, when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Modelling the brain as a bunch of pistons or as a complicated machine or clockwork thing is a lot better than as a magical clay golem or opaque soul. Modelling it as a computer is even better than that. Not a computer in the sense of an x86 desktop exactly, of course, but the concept of computation is clearly fundamental to understanding the system. Similarly, the brain is not ResNet but concepts like backpropagation are probably useful.
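For concreteness, backpropagation is nothing more than the chain rule applied layer by layer. A minimal sketch for a toy one-neuron "network" (the network and numbers are purely illustrative), with the analytic gradient checked against a finite-difference approximation:

```python
import math

def forward(w1, w2, x):
    h = math.tanh(w1 * x)   # hidden activation
    y = w2 * h              # output
    return h, y

def backprop(w1, w2, x, t):
    """Gradients of loss = (y - t)^2, propagated backward via the chain rule."""
    h, y = forward(w1, w2, x)
    dy = 2 * (y - t)                # dL/dy
    dw2 = dy * h                    # dL/dw2
    dh = dy * w2                    # dL/dh: error propagated backward
    dw1 = dh * (1 - h * h) * x      # dL/dw1, using tanh' = 1 - tanh^2
    return dw1, dw2

w1, w2, x, t = 0.5, -0.3, 1.2, 0.8
g1, g2 = backprop(w1, w2, x, t)

# Sanity check: central finite difference on w1 should agree closely.
eps = 1e-6
num1 = ((forward(w1 + eps, w2, x)[1] - t) ** 2 -
        (forward(w1 - eps, w2, x)[1] - t) ** 2) / (2 * eps)
print(abs(g1 - num1) < 1e-6)  # → True
```

Whether cortex implements anything like this error-propagation step is exactly the open question, but the algorithm itself is this simple.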
So, sure, maybe people have been using the latest fad to explain the brain forever. But that's only bad to the extent that the latest fad is getting further away instead of closer.
1: http://hermiene.net/essays-trans/relativity_of_wrong.html