I mean, he was right … for what we knew at the time. He predicted correctly that achieving general intelligence would require mimicking the extremely complex neural networks in our brains, something the hardware of the era was very far from achieving. He could not predict that things would move so fast on the hardware side (nobody could have), which is what made this somewhat possible. I would argue we are still a bit short of the computing power needed to make this a reality, but it is now much more obvious that it is possible if we continue on this path.
lm28469|2 years ago
Besides the name, neural networks and human brains don't have that much in common.
abecedarius|2 years ago
Hans Moravec, at McCarthy's lab in roughly this timeframe (the 70s), wrote about this -- you can find the seed of his 80s/90s books in text files in the SAIL archive https://saildart.org/HPM (I'm not going to look for them again). Easier to find: https://web.archive.org/web/20060615031852/http://transhuman...
(Same McCarthy as in this debate.)
Gordon Moore made up Moore's Law in 1965 and reaffirmed it in 1975.
mistrial9|2 years ago
Secondly, people selling things and people banding together behind one-way mirrors have a lot of incentive to devolve into smoke and mirrors.
Predicting is a social grandstand in a way, as well as an insight. Lots of ordinary research has insight without grandstanding... so this is a media item as much as it is real investigation, IMHO.
pixl97|2 years ago
I mean, this is what evolution does too. The variants that 'looked right' but were not fit to survive got weeded out. The variants that were wrong but didn't negatively affect fitness to the point of non-reproduction stayed around. Looking right and being right are not significantly different in this case.
YeGoblynQueenne|2 years ago
It is just as clear that human brains have one more ability beyond the ability to learn from observations: the ability to reason from what is already known, without training on any more observations. That is how we can deal with novel situations that we have never experienced before. Without this ability, a system is forever doomed to be trapped in the proximal consequences of what it has observed.
And it is just as clear that neural nets are completely incapable of doing anything remotely like reasoning, much as people in the neural nets community keep trying, and trying. The branch of AI that Lighthill almost dealt a lethal blow to (his idiotic report brought about the first AI winter), the branch inaugurated and championed by McCarthy, Michie, Simon and Newell, Shannon, and others, is thankfully still going and still studying the subject of reasoning, and making plenty of progress while flying under the hype.