jimfleming | 7 years ago
That's a stretch considering AGI has not yet been created by DeepMind or anyone else. Notably, DeepMind's most prominent successes have relied heavily on MCTS (Monte Carlo tree search), a classical planning method that doesn't have much relation to neuroscience without a lot of caveats. Their accomplishments on Atari lean far more on efficient computation than on biological plausibility.
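For readers unfamiliar with the planning method mentioned above, here is a minimal sketch of MCTS on a toy game. This is an illustration only, not DeepMind's implementation: the game (one-pile Nim, where players alternate taking 1 or 2 stones and whoever takes the last stone wins), the node structure, and all names are invented for the example.

```python
import math
import random

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile = pile            # stones remaining after `move`
        self.parent = parent
        self.move = move            # the move that led to this node
        self.children = []
        self.untried = legal_moves(pile)
        self.wins = 0.0             # wins for the player who made `move`
        self.visits = 0

    def ucb_child(self, c=1.4):
        # UCB1 selection: balance exploitation (win rate) and exploration.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(pile):
    # Random playout; True if the player to move at `pile` takes the last stone.
    turn = 0
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        turn ^= 1
    return turn == 1

def mcts(pile, iters=3000):
    root = Node(pile)
    for _ in range(iters):
        node = root
        while not node.untried and node.children:      # 1. selection
            node = node.ucb_child()
        if node.untried:                               # 2. expansion
            m = node.untried.pop()
            child = Node(node.pile - m, parent=node, move=m)
            node.children.append(child)
            node = child
        win_for_mover = not rollout(node.pile)         # 3. simulation
        while node is not None:                        # 4. backpropagation
            node.visits += 1
            node.wins += win_for_mover
            win_for_mover = not win_for_mover          # alternate perspective
            node = node.parent
    return root

def best_move(pile, iters=3000):
    # Pick the most-visited child of the root, the usual MCTS final choice.
    root = mcts(pile, iters)
    return max(root.children, key=lambda ch: ch.visits).move
```

From 4 stones the optimal play is to take 1, leaving the opponent a losing pile of 3; with a few thousand iterations the search reliably finds this. Note there is nothing brain-like in the loop — it is statistics over simulated playouts, which is the point about MCTS being a classical method.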
I think the strategy they're actually following (and I believe they've stated this more recently) is to use what works and to look to neuroscience only when other methods fail. This feels more solid than looking to the brain first to narrow the search space, which is the approach Numenta has taken and which does not scale as easily.
xiphias2 | 7 years ago
Efficient computing is of course needed for AGI; I think that was never in question. The question is which algorithms to run on the computers, and what computing architectures should be built for those algorithms.
Those Atari experiments were, AFAIK, the first to put together reinforcement learning and deep convolutional networks, and yes, tree search was needed (something human brains do consciously, but extremely badly compared to computers).
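The reinforcement-learning half of that combination reduces to the temporal-difference (Q-learning) update; DQN's contribution was replacing the Q-table with a convolutional network over raw pixels. Below is a hypothetical tabular sketch of that update on a made-up toy environment (a 5-cell corridor with a reward at the right end); the environment, constants, and function names are all invented for illustration.

```python
import random

N_STATES, ACTIONS = 5, (0, 1)   # actions: 0 = left, 1 = right
ALPHA, GAMMA = 0.5, 0.9         # learning rate, discount factor

def step(s, a):
    # Deterministic corridor dynamics: reaching the rightmost cell pays 1
    # and ends the episode.
    s2 = max(0, s - 1) if a == 0 else s + 1
    if s2 == N_STATES - 1:
        return s2, 1.0, True
    return s2, 0.0, False

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Fully random behaviour policy: Q-learning is off-policy, so it
            # still learns the values of the greedy policy.
            a = rng.choice(ACTIONS)
            s2, r, done = step(s, a)
            target = r if done else r + GAMMA * max(Q[s2])
            # The TD update at the heart of DQN (there, Q is a conv net
            # and this becomes a gradient step on the squared TD error).
            Q[s][a] += ALPHA * (target - Q[s][a])
            s = s2
    return Q
```

After training, the learned values prefer "right" in every non-terminal state, i.e. the greedy policy walks straight to the reward.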
Just looking at what works is not enough. There was a strong reason why DeepMind didn't start with modelling language or logical reasoning, like many others did, and that reasoning was grounded in biology (animal behaviour).
jimfleming | 7 years ago
EDIT: Richard Sutton (widely credited as the grandfather of RL) has written about this recently: http://incompleteideas.net/IncIdeas/BitterLesson.html