item 11178055

Artificial intelligence: Ten things you need to understand

25 points | ehudla | 10 years ago | alphr.com

20 comments

[+] npalli|10 years ago|reply
The "breakthrough" AI of today is deep learning on massive amounts of data applied to two areas - speech/NLP and vision. How this generalizes to a superintelligence that can take over the planet is so strange. Does a child need to look at billions of images to figure out what a chair or cat is? Will this AI figure out how to select a good business partner?

The problem is that you ask someone who is good in one field (say electric cars or theoretical physics) to opine on something like AI. The correct response is to say that you don't know anything about AI. But the ego of being a public intellectual prevents that. So what is the safest option to avoid seeming dumb? Say something like: we need to make sure safeguards are in place to prevent AI from becoming dangerous and killing everyone.

Meanwhile, people who actually build these systems know that they do not generalize to a variety of tasks (like humans do) and that they are not intelligent. Best case, they augment humans in their tasks.

[+] _vk_|10 years ago|reply
>Does a child need to look at billions of images to figure out what a chair or cat is?

Of course! Not exclusively images of cats or chairs, but children have absolutely seen billions of images by the time they start to exhibit discernibly human-level intelligence.

[+] rl3|10 years ago|reply
>The "breakthrough" AI of today is deep learning on massive amounts of data applied to two areas - speech/NLP and vision.

What of IBM's Watson? It's one of the premier AI projects in the world, its specialty is neither of those two areas, and it utilizes deep learning on massive amounts of data.

>How this generalizes to a super intelligence that can take over the planet is so strange.

No one's really saying it does, just that recent progress in AI has accelerated such that it seems probable that even more progress is imminent.

>The correct response is to say that you don't know anything about AI.

I really tire of this constant appeal to authority; it's arrogant at best. By that same logic, Nick Bostrom shouldn't even have written his latest book, due to a lack of technical knowledge.

[+] PaulHoule|10 years ago|reply
Other things are going on too; they are just not so faddish.
[+] PaulHoule|10 years ago|reply
I don't agree with the definition of "evil" used in this article.

Eichmann, for instance, didn't kill the Jews because he was "wicked"; he did it because he was following orders. That's evil enough, and he was hanged for it.

A while back we wargamed the idea of an "Evil Teddy Ruxpin" that would want to harm you with all its might but wouldn't have much might so it wouldn't be dangerous. It might be fun to battle with, but we figured it wouldn't be safe because it could always start a fire.

[+] jkoschei|10 years ago|reply
Okay, I'll bite. I'm not particularly well-versed in AI issues, but this article is of the end-is-nigh variety and HN tends to be a technologically optimistic community, so I'm hoping someone can debunk this and give us reason to be optimistic rather than terrified of our future as human batteries in The Matrix.

Anyone?

[+] robotkilla|10 years ago|reply
> rather than terrified of our future as human batteries in The Matrix

Humans make terrible batteries, therefore it's more likely we'll just be exterminated.

[+] gbhn|10 years ago|reply
How about this:

For the hundred thousandth time, a generation of humans will be confronted with the necessity of incorporating into their society a group of beings which they created and which they love, hate, fear, trust, and, most of all, barely understand. For a million years, that group was "their own biological children," but over the next few decades, that group might also come to include "their mental children: AIs."

In other words, we're damn good at this. We'll make it. :-)

[+] bainsfather|10 years ago|reply
I do not think we get AI in the near future. What we have at the moment are more like 'Intelligence Amplification' tools for humans to use/direct.

But there are 3 concerns:

(1) Maybe, although the chance of dangerous AI is small, the outcome is so bad that probability x payoff is large enough that we ought to worry/think about it?

(2) If it happens that jobs lost to AI are not replaced by other work (hard to say) then we have unemployment & social problems.

(3) The current deep learning breakthroughs in image recognition, speech recognition, etc, make it much easier to process all that surveillance data that is being gathered. When surveillance tools, and e.g. drones as well, can be controlled by small numbers of humans, you should be worried. Historically, governments have usually required the support of a fair fraction of their populace in order to stay in power. Ordering soldiers to shoot their fellow citizens has always been risky for governments. Soon that might not be the case.

In the past, a nation's power depended on its level of technology, its capital equipment, and the number and skills of its population. There was an incentive to have a skilled, well fed, and content populace.

Maybe a large part of the populace will no longer be 'needed'?

I guess you could counter points 2 & 3 by saying "Yes, but our democratic institutions are strong and our politicians are caring and intelligent - our societies will deal with these changes."

For myself, (3) scares me. You should be afraid of ending up like the Scottish Highlanders turfed out of their homes by Chiefs who replaced them with sheep, or like the cart-horses who were replaced by the internal combustion engine (and were shot). There is no need to fear an AI taking over, it is humans you need to be afraid of.

[+] bjornsing|10 years ago|reply
> 6. Once artificial intelligence gets smarter than humans, we've got very little chance of understanding it

Is that really so...? My gut feeling is that it's probably not. I don't know exactly where this gut feeling comes from, but I think the underlying reasoning goes something like this: Richard Feynman was a hell of a lot smarter than I am, but I can still understand his ideas. Of course an AI could construct incredibly long mathematical proofs, and similar, that no human could verify, but that wouldn't be much like the difference between man and ape. Is there really an entirely different way of understanding the universe out there, one that is radically more productive than ours? I doubt it.

Another way to put it I guess is: I'm simply not sure the marginal utility of (raw) intelligence is that great. In fact I remember once telling my friends that my life would be so much better if I was just a little smarter. It was meant as a joke.

Yet another way to think about it is to ask what's holding back our understanding of the universe. I'd say it's not really "intelligence" at all, but rather "money". Take gravitational waves for instance: Einstein predicted them some hundred years ago(!) and they were only detected just now, after spending I don't know how many millions/billions of dollars...

But either way, this is probably one of the most interesting philosophical questions of our time.

[+] ehudla|10 years ago|reply
Wasn't it Hofstadter who hypothesized that an intelligent system will not have access to its own lower levels? I can't put my hands on the exact quote at the moment. If anyone remembers, I'd appreciate the info.
[+] hacker42|10 years ago|reply
> It's entirely possible that the reason we've never met aliens is because they invented artificial intelligence before they could build spaceships capable of interstellar travel, and that discovery caused their extinction.

This is really not so clear, because it would require AIs never to invent space travel, measurable large-scale structures, or signals, which seems unlikely (assuming these sorts of things are possible in the first place). If astronomical evidence of these sorts is physically impossible, then there is no need to explain the Great Filter with AI in the first place.