top | item 39502910

perfobotto | 2 years ago

I mean, he was right … for what we knew at the time. He correctly predicted that achieving general intelligence would require mimicking the extremely complex neural networks in our brain, which the hardware of the time was very far from achieving. He could not predict that things would move so fast on the hardware side (nobody could have) and make this somewhat possible. I would argue we are still a bit out from having the compute power to make this a reality, but it is now much more obvious that it is possible if we continue on this path.

lm28469|2 years ago

> He correctly predicted that achieving general intelligence would require mimicking the extremely complex neural networks in our brain

Besides the name, neural networks and human brains don't have that much in common.

frozenseven|2 years ago

Most of the relevant similarities are there. Every plausible model in computational neuroscience is based on neural nets or a close approximation thereof, everything else is either magic or a complete non-starter.
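For what it's worth, the abstraction both sides are arguing about is fairly small. A minimal sketch of one artificial neuron, a weighted sum of inputs passed through a nonlinearity, which is the building block computational models like these share (all numbers here are illustrative, not from any particular model):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through
    a sigmoid nonlinearity (a crude stand-in for a biological
    neuron's firing rate)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy example with two inputs and hand-picked weights.
out = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
```

Whether this abstraction captures the "relevant" similarities to real cortex is exactly the point in dispute upthread.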

mistrial9|2 years ago

ok - except that detailed webs of statistical probabilities only emit things that "look right" .. not at all the idea of General Artificial Intelligence.

secondly, people selling things and people banding together behind one-way mirrors have a lot of incentive to devolve into smoke-and-mirrors.

Predicting is a social grandstand as much as an insight. Lots of ordinary research has insight without grandstanding, so this is a media item as much as it is real investigation, IMHO.

perfobotto|2 years ago

To be honest, restricting funding to the kind of symbolic AI research criticized in this discussion might have helped AI more than it hurt, by eventually pivoting research toward neural networks and backpropagation. I don't know how much good it would have done if this kind of research had continued to be fully funded.
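The core idea the field pivoted to can be sketched in a few lines: adjust a weight by following the gradient of an error. This toy gradient-descent example fits a one-parameter model y = w * x to a single data point (the numbers are illustrative; backpropagation is this same idea applied layer by layer through a deep network):

```python
def fit(x, y, lr=0.1, steps=100):
    """Gradient descent on the squared error (w*x - y)**2,
    with the gradient derived by hand: d/dw = 2*(w*x - y)*x."""
    w = 0.0
    for _ in range(steps):
        pred = w * x
        grad = 2.0 * (pred - y) * x
        w -= lr * grad
    return w

w = fit(2.0, 6.0)  # converges toward y / x = 3.0
```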

pixl97|2 years ago

>except that detailed webs of statistical probabilities only emit things that "look right" .. not at all the idea of General Artificial Intelligence.

I mean, this is what evolution does too. The variants that 'looked right' but were not fit to survive got weeded out. The variants that were wrong but didn't negatively affect fitness to the point of non-reproduction stayed around. Looking right and being right are not significantly different in this case.

YeGoblynQueenne|2 years ago

What makes it "much more obvious that it is possible" to simulate the human brain? If you're thinking of artificial neural nets, those clearly have nothing to do with human intelligence, which was very obviously not learned by training on millions of examples of human intelligence; that would have been a complete non-starter. But that's all artificial neural nets can do: learn from examples of the outputs of human intelligence.

It is just as clear that human brains have one more ability beyond learning from observations: the ability to reason from what is already known, without training on any more observations. That is how we can deal with novel situations we have never experienced before. Without this ability, a system is forever trapped in the proximal consequences of what it has observed.

And it is just as clear that neural nets are completely incapable of doing anything remotely like reasoning, much as the people in the neural-nets community keep trying, and trying. The branch of AI that Lighthill almost dealt a lethal blow to (his idiotic report brought about the first AI winter), the branch inaugurated and championed by McCarthy, Michie, Simon and Newell, Shannon, and others, is thankfully still going, still studying reasoning, and making plenty of progress while flying under the hype.