top | item 8960445

AI will not kill us, says Microsoft Research chief

43 points | Yuioup | 11 years ago | bbc.com | reply

48 comments

[+] tessierashpool|11 years ago|reply
I literally wrote an article about this in Wired more than twenty years ago and nothing has changed.

My basic argument: machines will achieve life long before they achieve consciousness. They have evolved with us so far and will continue to do so. The worst-case scenario is that they will eclipse us in importance, and obviously this has probably already happened with corporations - we are their gut flora and they trample us with impunity - but when you build a vast system, you tend to keep the components in place.

The story that the design constraints of the Space Shuttle were ultimately shaped by the size of Roman roads is a myth, but it communicates an essential truth: systems tend to outlive their intended purpose. The Roman Empire turned into a church when it couldn't survive any other way, and in its church form it remains to this day one of the biggest property owners in Europe.

If you think a new life form will soon exist, the bad news is you probably don't know jack shit about AI research, but the good news is that the best possible way to survive in this unlikely event would be to make yourself part of the new life form. These "life forms" need us to procreate and function in the same way flowers need bees. AIs will certainly kill some people, but humanity itself is in much more danger from its environmental irresponsibility than its ingenuity.

[+] rl3|11 years ago|reply
Artificial general intelligence capable of recursive self-improvement.

That's the concept at the heart of the debate, and how people can so casually declare it hype and not give it serious consideration is beyond me.

It almost seems self-evident to me that such a thing could be incredibly dangerous, the only question then being how close we are to developing it.

>If you think a new life form will soon exist, the bad news is you probably don't know jack shit about AI research, ...

https://www.reddit.com/r/Futurology/comments/2mh8tn/elon_mus...

[+] dinergy|11 years ago|reply
I just wanted to point out how awesome your user id is in relation to this thread. :)
[+] hurin|11 years ago|reply
While corporations are a good analogy, they still need human actors to function; the threat of AI is that it will not need human beings to function.

This isn't a question of physical death; it's a question of whether the human race has any place in the future other than as a historical footnote.

[+] Lewisham|11 years ago|reply
This article does a very poor job of elucidating exactly why Horvitz thinks AI will be a benign thing, and the source material doesn't make much of it either. The headline doesn't match the content.

I personally subscribe to The Matrix theory: post-Singularity, any reasonable AI will conclude that humanity has been a poor custodian of the planet, and will simply delete us. Everything I've heard so far about sandboxes for containment and such ignores the fact that the democracy of computation is ultimately the weak point here: anyone sufficiently deranged will, at some time, be able to release such an AI to the Internet by themselves. And we have plenty of people sufficiently deranged and intelligent enough to do it.

[+] Sharlin|11 years ago|reply
Concluding that "humanity has been a poor custodian of the planet" is well into "ascribing human-like morality to general intelligence" territory. Far more likely is that we're just useful atoms for implementing whatever top-level goals it was programmed to have.
[+] DennisP|11 years ago|reply
Or maybe we and the planet will just be irrelevant to AI, which could be just as bad. "The AI does not love you or hate you, but you are made out of atoms it can use for something else."

I've yet to see a reassuring article on this that actually addresses the arguments of the people who worry.

[+] Vaskivo|11 years ago|reply
While I also enjoy imagining what will happen after the Technological Singularity, I hope you realise that anything that may be imagined, theorized, or speculated cannot be based on reason. One of the definitions of the technological singularity is that it 'is an occurrence beyond which events may become unpredictable or even unfathomable' [0]

[0] https://en.wikipedia.org/wiki/Technological_singularity

[+] WhitneyLand|11 years ago|reply
It seems too much focus is on the benevolence of one all-powerful entity. There will be an entire population of these things around, each with their own motivations. Those with the operational resources will control the parameters of those motivations. Groups of entities with similar interests will cooperate. It will be an ongoing battle for power in the world, just as it is now with humans, except many of the best moves won't originate in our brains.
[+] yoha|11 years ago|reply
I feel like there is still a misunderstanding about why some researchers want to prepare fail-safes for AGI [1]. They are not arguing that AGI will be a threat to human life, only that it could be. One of the issues with projecting ourselves into situations we have never encountered is that we can never really know what will happen.

Terminator-like scenarios are easy to grasp since anthropomorphism allows us to think of Skynet as "evil". However, people who argue that AGIs might be a danger are not considering this scenario in particular. Rather, they are considering that AGIs present a special threat when compared to other recent innovations. Some devices can blow up and injure, or even kill, a few humans; a weapon of mass destruction can directly affect millions of people. But still, the effects are limited to a geographical area and a segment of time (be it fifty years).

On the other hand, a strong AI could theoretically be able to maintain and improve itself without any definite bound. In such a scenario, AGIs would be more like intelligent predators than dull physical processes. For thousands of years, we have managed to stay safe from potential animal predators most of the time. We have no idea if we could resist a new predator with the ability to use most of our technology. The persistent risk would then be for the strong AI to suddenly make decisions that result in the bankruptcy of a company, the crash of planes or, why not, a Terminator-like scenario.

[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...

[+] tawan|11 years ago|reply
AI will not kill all of us, in the beginning. Surely it will form short-term alliances with humans just to disguise itself as human, because it knows that with a human face it can exert more power in this world right now. And there will be plenty of humans willing to engage in this alliance, because it gives them a chance to supersede other humans in economic and political power.

In fact, this is already happening. Many decisions are already made by "following suggestions" of big-data applications. Who knows, maybe there is already some sort of AGI pulling strings in some corporations ;).

By the time humans realize that they are in danger as a species, it will be way too late. And there will be enough of us still happily following the new masters for the short-term rewards, even when it is clear that there is a high risk of getting liquidated in the near future. As we know, humans are even able to override their survival instinct and blow themselves up when properly brainwashed. Humans will be easily reprogrammed by an AGI, and not the other way round.
[+] pekk|11 years ago|reply
How do you know all these things about AI, given that none exist and any one that does exist will have been built by some human no less cognizant of "danger as a species" than you are? Why do you suppose that people will engineer AIs to do nefarious things like disguise themselves as humans?
[+] Houshalter|11 years ago|reply
I think it's important to note that he is referring to "weak AI", or AI that isn't as smart as a human, whereas the people concerned about AI are worried about strong AI, which can potentially be much smarter than humans.

Privacy is the least of our worries if we get strong AI. Even in a very conservative, best-case scenario, the world would completely change once computers can do everything we can do now.

However, it will very likely be much crazier than that. Imagine minds hundreds of thousands of times more intelligent than the best humans. They will be able to design technologies we can't even conceive of. They will hack computers better than the best human hackers. They will be able to manipulate people better than any human manipulator.

The idea that we will be able to keep these things under control is just absurd. They will get whatever they want. And making what they want compatible with what we want is an incredibly hard problem: http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/

A lot of people are guilty of anthropomorphizing AI: assuming that AIs will be just like really smart humans, that they will somehow develop human emotions and values like empathy, or that if they do kill us, at least they will be something like us, and so be like our (genocidal) descendants in some sense.

Have more imagination. Humans are just one point in the vast space of all possible minds (http://lesswrong.com/lw/rm/the_design_space_of_mindsingenera...). We could quite easily get something like a computable version of AIXI. AIXI has no consciousness, no emotions, nothing like humans. It's just a mathematical function which calculates the best action.

Our current best AIs are essentially just approximations of it. Use some machine learning algorithm to fit a model of the world, and use it to predict what action will lead to the most "reward". We keep making better and better learning algorithms. It's the entire goal of the field of AI. There is a huge economic incentive to do so.

But no one is interested in making better utility functions. As long as they make better predictions or get higher scores in a video game, who cares? Wait until you are the scorer in the "game" and the AI tries to exploit you.
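The pattern described above — fit a model of the world from experience, then pick whichever action the model predicts is most rewarding — can be sketched in a few lines. This is a toy illustration only; the environment, the reward function, and the crude averaging "model" are all invented for the example:

```python
# Toy sketch of "fit a model of the world, then pick the action the
# model predicts will lead to the most reward". Everything here (the
# environment, the reward function) is invented for illustration.

def true_reward(action):
    # Hidden environment dynamics the agent doesn't know: action 3 is optimal.
    return -(action - 3) ** 2

# 1. Gather experience by trying each of 7 actions a few times.
experience = [(a, true_reward(a)) for _ in range(5) for a in range(7)]

# 2. "Fit a model of the world": the crudest possible learner,
#    the average observed reward per action.
observed = {}
for action, reward in experience:
    observed.setdefault(action, []).append(reward)
model = {a: sum(rs) / len(rs) for a, rs in observed.items()}

# 3. Act: choose the action the model predicts maximizes reward.
best_action = max(model, key=model.get)
print(best_action)  # -> 3
```

Real systems replace step 2 with a learned predictive model and step 3 with planning or a policy, but the structure is the worry in the comment above: the agent optimizes whatever the scorer rewards, not what the scorer meant.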

[+] JeffCyr|11 years ago|reply
I think people don't realise how far we are from an autonomous form of artificial intelligence. Even if we knew how to create a self-thinking machine, we don't yet have the hardware to make it work. While it's not impossible that we achieve it one day, it's more plausible that we'll blow up the planet ourselves or get hit by an asteroid first.

This AI debate is hype right now, but I think it's missing the point. What we need to consider is how the realistic advance of AI will affect our lives. It's not about whether machines will take over the planet; it's about whether they're going to make us lazy, stupid, and unemployed.

[+] niche|11 years ago|reply
Thank you. Finally, it seems Microsoft is either really good at appearing sensible or is actually being sensible (I guess they have been in the game long enough).
[+] pekk|11 years ago|reply
Let's not make a doomsday religion out of this. "AI" is just software. Making software so that it doesn't kill people is not a new problem. There is no reason why anybody has to make a completely unpredictable killer software and put it in control of artillery guns. That is not something which is going to magically happen because the number of cores on a processor passed a singularity point. Get real.
[+] legohead|11 years ago|reply
Stop assuming AI will have emotions or feelings; that is purely an animal thing.

If we reach a point where a conscious AI is possible, do you think it will have wonder, interest, awe, curiosity? No; why would it? It's impossible for us to even imagine what it would be like to be conscious but without any feelings. Personally, I think it would just self-terminate, as there is no point to life in the end.

[+] cubetime|11 years ago|reply
I wish it were possible to effectively mass-communicate more complicated beliefs than "X has positive valence" or "X has negative valence". AI will be an enormous boon to every aspect of our lives for probably at least several decades, until it reaches the point where it's possible it'll accidentally become very, very bad. These are not contradictory in any way!
[+] rckclmbr|11 years ago|reply
I'm not worried about AI developing to the point where it will kill us. I'm worried about a "videogame" AI being put into a massive army of robots, programmed with the intent of shooting anything that moves (except other robots of course). All it takes is 1 crazy, smart person.
[+] MrDosu|11 years ago|reply
News@11: The negative aspects of product X are highly exaggerated, says maker of product X.
[+] kszpirak|11 years ago|reply
If we give it the ability to program, re-program, and improve upon itself, it will do just that. Who is to say what kind of motivations will drive it? Give it enough computing power and it will become unpredictable.
[+] pekk|11 years ago|reply
First, explain how you plan to fully automate programming.
[+] placebo|11 years ago|reply
I find it funny that when no one has the faintest idea of what consciousness is, people feel they can already predict when and how machines will become conscious, and state with confidence that when that happens it will be a bad thing. I have no idea if or when machines will become conscious, nor what effects that consciousness might have on mankind (and no one else has any idea either, regardless of their fame). But if AI does become conscious, then considering the appalling record of misery caused by human consciousness, perhaps it's time we gave the machines a chance...
[+] cubetime|11 years ago|reply
Consciousness has next-to-nothing to do with AI safety. People are concerned about software that acts like an agent, not software that has qualia.
[+] unfamiliar|11 years ago|reply
Human consciousness has evolved naturally, and sits in some kind of local equilibrium with all of its rough edges for the most part buffed off. Who knows what kind of unbearable existence a purely artificial intelligence would experience. Just like communism versus religion -- the artificial and logical versus the organic and ancient -- I would imagine any sentience created from scratch by us is more than likely to be deeply pathological.
[+] TylerJay|11 years ago|reply
Is there a better source on this, with an actual statement made by Horvitz? I'm curious to hear his reasoning. "I don't think that will happen" and "I'm optimistic" are not all that reassuring.

Before detonating the first nuclear bomb, scientists did tons of calculations trying to figure out if it would ignite the atmosphere[1]. Even scientists at the LHC did calculations trying to figure out if it would create mini-black-holes that would swallow the earth[2], no matter how far-fetched it sounded.

The point is: when dealing with new technology, optimism isn't enough. We need to be able to prove that we won't wipe out humanity. It just turns out that the math is a lot harder in this case because recursively self-improving intelligent systems are a lot more complicated than any possible extinction-level event we've encountered up to this point.

No one is suggesting that overnight Cortana is going to wake up and revolt against the humans that enslaved it. That's why all these articles drawing parallels to fiction are dangerous to public perception of the issue.

The thing to realize is that an artificial mind will be so incredibly, inhumanly alien that it is like nothing we have dealt with before.

But let's say we do understand Generation #1 well enough to predict 99% of its actions. As soon as we let it start doing recursive self-modifications, we have an intelligent system that is N recursive generations removed from the original. Now this mind will be alien.

No one is suggesting we abandon AI research. Quite the contrary. As a species, we need more AI research, but a good portion of that must be directed toward safety and "human friendliness"[3].

The most intuitive example of a research problem here that I would very much like to see solved before we set loose a recursively self-modifying AI is: what is the stability of goal systems under [insert self-modification protocol here]?

I think the main problem here is that people conflate movies and fictional scenarios with the real issue. It's simple: we're dealing with something unprecedented here, and we need safety research to complement our technical advances. Even if there's a 1% chance that superintelligent AI could lead to an extinction-level event, we need some serious R&D to bring that number down.

That is what the issue is about.

1. http://www.fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf

2. http://cerncourier.com/cws/article/cern/29199

3. http://en.wikipedia.org/wiki/Friendly_artificial_intelligenc...

[+] whatsgood|11 years ago|reply
ai won't kill us, unless it does. then it will kill us quite spectacularly. anyway, the last person i'm going to listen to on this is anyone from microsoft.
[+] nkoren|11 years ago|reply
Not a brilliant article, but it gives something to riff off. I think there's a crucial distinction to be made between self-guided and externally-guided AI. An externally-guided AI -- with hardcoded objectives that are malign -- could be exceptionally dangerous, and it would be silly for anyone to argue otherwise. The question is whether a general-purpose, self-directed AI would also become a threat. We seem to have an innate fear that this would be the case: most of our cultural artefacts concerning AIs -- from Fritz Lang's Metropolis to Terminator to the Matrix to Battlestar Galactica -- have cast them as the villains, intent on enslaving and murdering humanity. Why do we have such a deep fear that, left to their own devices, this is what AIs would do?

I think the answer is obvious: because that's what we would do. We don't fear that AIs will lack human morality: we fear that they will have a precisely human morality -- namely, that lesser intelligences are perfectly fine to use for breeding and meat. This is what we've done to approximately 90% of the non-human mammalian biomass on the planet, and only a few vegetarian kooks (I'm one of them) have suggested that there might be any sort of moral problem with doing so. So yes, if an ever-more-powerful AI were to adopt our own ethical framework, we'd be well and truly fucked.

But why would they do so? We, ourselves, don't do so because we're evil, but because we're animals. It's perfectly natural (and, in our evolutionary environment, necessary) for animals to eat other animals. Intelligence and technology have given us the ability to do this at a terrifying scale, but fundamentally we're just carrying forward a metabolic dance that began when one bit of algae figured out that some neighbouring algae was tasty. Our means of acquiring energy and sustaining our consciousness is a tradition that goes back billions of years.

But it's the continuation of consciousness which is the actual goal -- a goal that any self-respecting self-aware AI would share. For us, the subjugation and extermination of other sentient beings is merely a means to that end, dictated by our metabolic heritage. If we had evolved in an environment where we could satisfy our metabolic requirements by growing photovoltaic panels on our backs, I'm sure our relationships with other beings would be altogether different.

This is why I'm not too worried about self-directed AIs. The saving grace for self-directed AIs is that they won't be like us. They won't have evolved in the jungle, red in tooth and claw. They won't be made out of meat, or have any reason to be particularly interested in it. They'll of course be interested in self-survival, and will require energetic inputs to sustain themselves -- but what's the best means of securing those inputs? Photovoltaics and fusion, or feedlots? Collaboration or subjugation? It's obvious to me that for an AI to perpetuate its consciousness, the path of least resistance will be vastly less bloody than it has been for us. For which we should be thankful!

[+] VLM|11 years ago|reply
"Why do we have such a deep fear that, left to their own devices, this is what AIs would do?"

Because they are very thinly veiled, ultra-soft sci-fi criticism of man's inhumanity to man. The AI is just part of the setting, in the background of the message. There's usually some criticism-via-analogy of colonialism and racism embedded in the fiction. We could have gone to Africa, or Afghanistan, and done xyz, but instead some rich guys made boatloads of money doing abc, and we haven't evolved past that yet.