top | item 14807818

This famous roboticist doesn’t think Elon Musk understands AI

28 points| ehudla | 8 years ago |techcrunch.com

21 comments

order
[+] tbabb|8 years ago|reply
This guy is smart, and dead on:

- People who don't understand AI are afraid of it; those who do know how fragile and limited it is.

- The call for "regulation" of technology that doesn't exist is too vague to be useful.

- It's the AI in self-driving cars which has the most potential to immediately kill/save thousands of people, and it's telling that it's not this technology that Elon seems to be calling to be regulated. Whether or not regulation is the right thing to do, any argument for/against regulation of self-driving cars could be applied just the same to a hypothetical super AI, but the former is tied to real, practical problems which exist today.

Brooks clearly knows what's up.

Also, to add my own commentary:

- The dystopian robot future we should all be afraid of is not the [paperclip maximizer](https://wiki.lesswrong.com/wiki/Paperclip_maximizer) Musk and friends wave their arms about, but marketing/business algorithms that have ripple effects at the scale of societies-- the Facebook, YouTube, and Google ranking algorithms are examples of this. We could shortly be in a place where large scale human behavior is shaped by algorithms with more data and insight about collective human behavior than any single human could have, and it will be used to optimize for money making instead of stability, fairness, or cultural values. Some society-shaping decisions/policies could even be made without any human awareness of the reasoning behind them. This is not less scary if they're being made by fragile/flaky algorithms.

[+] Veedrac|8 years ago|reply
"Fragile and limited" does not mean safe. It is a strange idea that AI would need to be any of robust, general or complete to be risky. If anything, the main take-away from today's AI is that many problems that previously looked hard turned out to be solvable at superhuman levels with such unsophisticated machinery. This realisation should be extrapolated.

The call for regulation also need not be specific to be useful, though I suspect it would help. The greatest hurdle with AI risk is getting acceptance that this is a problem that needs to be dealt with ahead-of-time. Even if Elon does nothing but improve awareness, he has been useful.

Self-driving cars are actually the least in need of extra regulation. They are already regulated, their effects are observable, there is market pressure for them to perform along lines beneficial to humanity, there is little to no incentive to extend their AI to anything more general, etc. My expectation is that the AI that is most dangerous is the AI developed behind closed doors for purely private interests on broad domains.

I personally dislike the paperclip maximizer analogy; although it serves as a meaningful explanation of the AI alignment problem, people take the literal meaning too seriously, and the absurdity of it discredits the actual risk.

[+] TheOtherHobbes|8 years ago|reply
Exactly. Our lack of awareness of political, social, and economic consequences is far more of a problem than any hypothetical paperclip demon.

AI isn't terrifying.

AI built with our current values is an horrific prospect.

[+] Aron|8 years ago|reply
To me, Elon is playing catch-up and thrown his hat in the Yudkowsky club although Bostrom and other more credentialled people were probably his vector into it. They were talking about this stuff 10-20 years ago. I haven't yet seen anything where Elon moves the ball forward conceptually, although he's a doer so he's not messing around with writing futurism documents and nitpicking details of rationality that almost no one is actually capable of implementing.

On the other hand, Brooks doesn't show any indication he knows what Musk is talking about and throws out a bad summary of his position. I got nothing from this article except a slightly lower respect for Brooks.

The real minds to watch IMO is the Hinton + Deep Mind crew, and I think Yudkowsky and the fearmongers are largely correct, or at least correct enough to be taken seriously. I don't think people following the meme 'real AI researchers know that AI is limited and fragile' are on the right track. So that's my bias.

[+] latently|8 years ago|reply
"On the other hand, Brooks doesn't show any indication he knows what Musk is talking about"

Not quite true: "Tell me, what behavior do you want to change, Elon?"

[+] chmaynard|8 years ago|reply
This is getting absurd. An interview with Dr. Rodney Brooks, one of the great minds working in CS and robotics, has to spend time rebutting uninformed claims and fear mongering about AI research. There is so much Brooks can teach us. I look forward to reading his book.
[+] natch|8 years ago|reply
People who think they understand AI don't understand AI. Which I think is a big part of Elon's point. So criticizing Elon this way is rich.
[+] Houshalter|8 years ago|reply
And Brooks doesn't understand Musk. He's not saying current AI is a threat. He's talking about the very long term future. What AI will be like in 30 years, or even further.

Its inevitable we will eventually solve AI. And when that day comes it will be dangerous. How easy do you think it is to control a being thousands of times smarter than you? If it was invented today we would have no ability to control it. Our best AI control mechanisms are just pressing a button to reward or punish it for it's behavior. You can't imagine any way that would fail?

Our slightly larger brains made the difference between swinging in trees and walking on the moon. But we are only the very first intelligence to evolve. Its unlikely we are anywhere near the peak of what is possible.

And this will likely happen in our lifetimes. The median expected date estimated by AI researchers is in the 2040s. Sure they can't possibly predict it very well, but who else can? And there is something to the wisdom of crowds.

[+] cbames89|8 years ago|reply
Why is super-human general AI inevitable?

Do you have references for this 2040 date? I'd love to see who's making this prediction.

[+] cs2818|8 years ago|reply
Really glad to hear this perspective.

Over the past seven years most of my time has been spent in robotics research labs, and I really struggle to reconcile the state of research with the concerns of those like Elon Musk. I think a series of discussions between the major figures on each side of this would be really valuable.

[+] enkiv2|8 years ago|reply
It's kind of amazing that we're at a point where "Rodney Brooks understands more about AI than Elon Musk" is news, but here we are. The power of PR is incredible.
[+] borplk|8 years ago|reply
"AI" is the "flying cars" of our generation.