He was right a few years ago. But now several groups have developed efficient, capable online learning systems that don't require much data or iteration. When these and other cutting-edge neural network advances, such as techniques for avoiding catastrophic forgetting, are combined with incremental training in diverse environments with general inputs and outputs, I believe we will see general-purpose intelligence.
I believe we will see some demonstrations of AGI in the next two years. At first they will likely be general but unimpressive, not really as capable as animals or humans, and so people will dismiss them. But the demonstrated capabilities will increase quickly, and by 2023-2024 there will likely be consensus that AGI has been achieved.
Look at systems like this one https://github.com/ogmacorp/EOgmaNeo. It's a whole other type of NN that Kasparov and others aren't even aware of.
Do you believe that artificial intelligence will be capable of deciding, given an algorithm and a set of inputs, whether the algorithm will finish running?
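That is the classic halting problem, which Turing proved undecidable by diagonalization. A minimal Python sketch of the argument (the decider `halts` and the other names here are hypothetical, for illustration only — no such decider can actually be written):

```python
def make_contrarian(halts):
    """Given a hypothetical decider halts(f, x) -> bool (True iff f(x)
    terminates), build a function that defeats it by doing the opposite
    of whatever the decider predicts."""
    def contrarian(f):
        if halts(f, f):       # decider claims f(f) halts...
            while True:       # ...so loop forever instead
                pass
        return "halted"       # decider claims f(f) loops, so halt at once
    return contrarian

# Feeding the contrarian to itself is the contradiction: if `halts` says
# contrarian(contrarian) halts, it loops; if it says it loops, it halts.
# So no correct `halts` can exist, for any AI or algorithm.
```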
Back when Deep Blue won its chess match against Kasparov, everyone in the media talked about the superior intelligence of Deep Blue.
Even at the time, I clearly realized that IBM had just built a brute-force "bulldozer" that could examine 200 million positions per second. Even with that power, it had only a slight advantage over Kasparov, who can consider just a handful of positions per second.
Now we have another generation of "intelligent" machines based on deep learning, but I see these as just upgraded brute-force "bulldozers". It takes hundreds of millions of samples for them to infer rules that a human can infer from a thousand samples or fewer.
So I would call a machine truly intelligent if it could learn to play chess or Go from only a few thousand example games, calculating only a few moves ahead and no more than a few moves per second. Obviously, such a machine would beat human intelligence completely.
Such a machine still might not have self-consciousness or qualia, but that is yet another big challenge.
Are you sure that human thought isn't basically brute-force bulldozing? Just because it doesn't feel that way doesn't mean it isn't.
The time it takes us to learn something, the number of times we have to see or experience it, could be akin to bulldozing, couldn't it?
There are a lot of neurons in our brains constantly firing, perhaps comparable to the number of transistors in a deep learning GPU once you account for the difference in training time.
Human chess players learn from each other. What might look at first like learning from a small sample is really a great deal of knowledge transferred via a small sample. Millions of people have played billions of chess games combined. We're learning by parallel Monte Carlo simulation.
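The analogy can be made concrete with a toy Monte Carlo estimate: many independent random playouts, pooled, converge on knowledge no single playout contains. A minimal sketch (all names here are made up for illustration; `playout` stands in for one player's game):

```python
import random

def estimate_win_rate(playout, games=10_000, seed=0):
    """Pool the outcomes of many independent random playouts, the way a
    community of players collectively samples the space of games."""
    rng = random.Random(seed)
    wins = sum(playout(rng) for _ in range(games))
    return wins / games

# Toy playout: a game the first player wins about 55% of the time.
rate = estimate_win_rate(lambda rng: rng.random() < 0.55)
```

Each individual game is cheap and noisy; the pooled estimate is accurate, which is the sense in which the knowledge transferred via a small sample encodes billions of games.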
One thing I'd be interested to learn is, how much of what makes the difference between an above average chess player and a Master or a Grandmaster can be tied to better decision making after looking 3 or 5 moves ahead, and how much is the Master/Grandmaster's ability to look 10+ moves ahead?
Looking 10 or even just 5 moves ahead is overstated; that is not actually how it works most of the time. Most GMs only calculate that far in the endgame. Before that, looking 2 or 3 moves ahead is often sufficient, based on strategic elements or opening theory (which can't easily be understood by 'looking moves ahead'; they're judgments like "this pawn is passed" or "my light squares will become very weak", which can be substitutes for looking 30+ moves ahead).
Positions often resemble historic or previous games, so pattern recognition and the themes of those older games (e.g., "this particular structure will make it easier to get my rook on the 7th rank at some point") are important.
In fact, Capablanca, a former World Champion and endgame expert, has a famous quote claiming to look only one move ahead.
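The trade-off described here, shallow concrete calculation backed by positional judgment, is what a depth-limited game-tree search does. A minimal sketch (the toy tree and the `evaluate` callback are hypothetical, standing in for a player's strategic judgment):

```python
def negamax(node, depth, evaluate):
    """Depth-limited negamax: calculate `depth` plies concretely, then
    fall back to a static evaluation, the way a player's feel for
    'this pawn is passed' substitutes for deeper calculation."""
    if isinstance(node, int):   # terminal position: exact score
        return node
    if depth == 0:              # out of lookahead: use judgment
        return evaluate(node)
    # Best child from the side to move's view, sign flipped each ply.
    return max(-negamax(child, depth - 1, evaluate) for child in node)

# Toy game tree: inner lists are positions with moves to choose from,
# ints are final scores for the side to move at that leaf.
tree = [[3, -2], [1]]
result = negamax(tree, 2, lambda n: 0)
```

A better `evaluate` lets a shallower `depth` reach the same decision, which is one way to frame the difference between a strong club player and a Grandmaster.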
It has already been two decades. We're supposed to be three decades from the singularity. Personally, it doesn't feel like we're accelerating towards an AI that surpasses humans in general.