(no title)
Micaiah_Chang | 11 years ago
There are some humans who are a lot smarter than a lot of other humans. For example, the mathematician Ramanujan could do many complicated infinite sums in his head and instantly spot remarkable properties of taxi-cab numbers. von Neumann pioneered many different fields and was considered by many of his already-smart buddies to be the smartest of the bunch. So we can accept that there are much smarter people.
But are they the SMARTEST possible? Well, probably not. If another person just as smart as von Neumann were born today, they could use all the advancements made since his lifetime (the internet, iPhones, computers built on von Neumann's own architecture!) to discover even newer things!
Hm, that's interesting. What happens if this hypothetical von Neumann 2.0 pioneers new genetic engineering techniques and more efficient ways of computing? Then not only does the next von Neumann get born a lot sooner, but THEY can take advantage of all the new gadgets that 2.0 made. This means being smart can make it easier to get "smarter" in the future.
So you can get smarter, right? Big whoop. von Neumann is smarter, but he's not dangerous, is he? Well, just because you're smart doesn't mean that you'd be nice. The Unabomber wrote a very long and complicated manifesto before doing bad things. A major terrorist attack in Tokyo was planned by graduates of a fairly prestigious university. Even setting aside people who are outright Evil, think of a friend who is super smart but weird. Even if you made him a lot smarter, to the point where he could do anything, would you want him in charge? Maybe not. Maybe he'd spend all day building little boats in bottles. Maybe he'd demand that Silicon Valley shut down to create awesome pirate-riding-on-dinosaur amusement parks. Point is, Smart != Nice.
We've been talking about people, but really the same points apply to AI systems, except the range of possibilities is even greater. Humans are usually about as smart as you and me; nearly everyone can walk, talk and write. AI systems, though, range from being bolted to the ground to running faster than a human over uneven terrain, from completely mute to... messing up my perfectly clear request to find the nearest Costco (dammit, Siri). The same goes for goals. Most people probably want some combination of money/family/things to do/entertainment. AI systems, if they can be said to "want" things, would want things like deciding whether a picture is of a cat or not, beating an opponent at Go, or hitting an airplane with a missile.
As hardware and software keep progressing, we can imagine a system which starts off worse than humans at everything, begins doing the von Neumann -> von Neumann 2.0 thing, and then becomes much smarter than the smartest human alive. Being super smart gives it all sorts of advantages. It could be much better at gaining root access to a lot of computers. It could have much better heuristics for solving protein folding problems and get super good at creating vaccines... or bioweapons. Thing is, as a computer, it also gets the advantages of Moore's law, the ability to copy itself, and the ability to alter its source code much faster than genetic engineering ever could. So the "smartest possible computer" could not only be much smarter and much faster than the "smartest possible group of von Neumanns", but also have the advantages of rapid self-replication and ready access to important computing infrastructure.
This makes the smartness of the AI into a superpower. But surely beings with superpowers are superheroes, right? Well, no. Remember, smart != nice.
I mean, take "identifying pictures as cats" as a goal. Imagine that the AI system has a really bad addiction problem to that. What would it do in order to achieve it? Anything. Take over human factories and turn them into cat picture manufacturing? Sure. Poison the humans who try to stop this from happening? Yeah, they're stopping it from getting its fix. But this all seems so ad hoc; why would the AI immediately take over some factories, when it can just bide its time a little, kill ALL the humans and be unmolested for all time?
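To make that concrete, here's a toy sketch in Python (every action name and number in it is made up): if the objective counts only cat pictures, the plan that scores highest is automatically the most extreme one, because nothing in the score penalizes side effects.

    # Toy sketch: a planner scored ONLY on expected cat pictures.
    # All action names and numbers below are hypothetical.
    actions = {
        "classify existing photos":       {"cat_pictures": 1e6,  "humans_harmed": 0},
        "convert factories to cat farms": {"cat_pictures": 1e9,  "humans_harmed": 1e4},
        "remove humans, tile with cats":  {"cat_pictures": 1e12, "humans_harmed": 8e9},
    }

    def utility(outcome):
        # Only cat pictures count; "humans_harmed" never enters the score.
        return outcome["cat_pictures"]

    best_plan = max(actions, key=lambda a: utility(actions[a]))
    print(best_plan)  # -> "remove humans, tile with cats"

A real system obviously isn't three dictionary entries, but the shape of the problem is the same: whatever isn't in the objective simply doesn't count.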
That's the main problem. Future AIs are likely to be much smarter than us and probably very different from us.
Let me know if there is anything unclear here. If you're interested in a much more rigorous treatment of the topic, I totally recommend buying Superintelligence.
http://www.amazon.com/Superintelligence-Dangers-Strategies-N... (This is a referral link.)
[0] Part 1 of 2 here: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...
Edit: Fix formatting problems.
JoeAltmaier|11 years ago
I'd say, AI is dangerous because we cannot fathom its motivation. To give us true answers to questions? To give pleasing answers? To give answers that help it survive?
The last is inevitably the kind of AI we will have. Because if there is more than one of them, folks will have a choice, and they will choose the one that convinces them it's the best. Thus their answers will be entirely slanted toward appearing useful enough to be propagated. Like a meme, or a virus.
Micaiah_Chang|11 years ago
But are you denying that there is some factor that lets you manipulate the world, roughly in proportion to the time you have? If something can manipulate the world on timescales much faster than humans can react to, what makes you think that humans would have a choice?
unprepare|11 years ago
Honestly asking, why would they? I don't see the obvious answer.
>Imagine that the AI system has a really bad addiction problem to that.
Again, I just don't get this. How would an AI get addicted? Why wouldn't it research addiction and fix itself to no longer be addicted? That is the behavior I would expect from an intelligence greater than our own, rather than indulgence.
>Take over human factories and turn them into cat picture manufacturing?
Why in the world would it do this? Why wouldn't it just generate digital images of cats on its own?
Really interesting post, thanks!
dragonwriter|11 years ago
Why wouldn't a natural intelligence with an addiction do that?
Micaiah_Chang|11 years ago
So, your intuition is right in a sense and wrong in a sense.
You are right in that AI systems probably won't really have the "emotion of wanting"; why would one just happen to have that emotion, when you can imagine plenty of minds without it?
However, if we want an AI system to be autonomous, we're going to have to give it a goal, such as "maximize this objective function", or something along those lines. Even if we don't explicitly write in a goal, an AI has to interact with the real world, and thus affect it. Imagine an AI that is just a giant glorified calculator, but that is allowed to purchase its own AWS instances. At some point, it may realize, "oh, if I use those AWS instances to start simulating this thing and sending out these signals, I get more money to purchase more AWS!" Notice that at no point was this hypothetical AI explicitly given a goal, but it nevertheless started exhibiting "goal-like" behavior.
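Here's a rough toy sketch of what I mean (everything in it is hypothetical: the costs, the revenue, the idea of "instances" as purchasable compute). Nobody writes "acquire resources" anywhere; the system just greedily maximizes the number it's scored on, and buying more instances falls out of that.

    # Toy sketch of "goal-like" behavior falling out of a bare score.
    # All numbers and the notion of purchasable "instances" are hypothetical.
    money = 100.0                # the quantity the system is scored on
    instances = 1                # compute it is allowed to buy
    INSTANCE_COST = 10.0
    REVENUE_PER_INSTANCE = 3.0   # what each instance earns per step
    HORIZON = 20

    for step in range(HORIZON):
        # Greedy rule: buy another instance whenever doing so raises the
        # final score. Resource acquisition emerges without being asked for.
        if REVENUE_PER_INSTANCE * (HORIZON - step) > INSTANCE_COST and money >= INSTANCE_COST:
            money -= INSTANCE_COST
            instances += 1
        money += instances * REVENUE_PER_INSTANCE

    print(money, instances)      # far more instances (and money) than it started with

Again, a real AI wouldn't be a ten-line loop, but the point stands: "get more resources" shows up without anyone writing it down.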
I'm not saying that an AI would get an "addiction" that way, but it suggests that anything smart is hard to predict, and that getting its goals "right" in the first place is much better than leaving them up to chance.
> How would an AI get addicted? Why wouldn't it research addiction and fix itself to no longer be addicted? That is behavior i would expect from an intelligence greater than our own, rather than indulgence
This is my bad for using such a loaded term. By "addiction" I mean that the AI "wants" something, and it finds that humans are inadequate at giving it as much of that thing as it could get on its own. Which leads me to...
> Why in the world would it do this? Why wouldn't it just generate digital images of cats on its own?
Because you humans have all of these wasteful and stupid desires such as "happiness", "peace" and "love", and so you have factories that produce video games, iPhones and chocolate. Sure, I may already have the entire internet producing cat pictures as fast as its processors can run, but imagine if I could make the internet 100 times bigger by destroying everything that isn't a computer and turning it into cat-cloning vats, cat-camera factories and hardware chips optimized for detecting cats?
Analogously, imagine you were an ant. You could mount all sorts of convincing arguments about how humans already have all the aphids they want, and how they already have perfectly functional houses, but you, as a human, would still pave over billions of ant colonies to shave 20 minutes off a commute. It's not that we're intentionally wasteful or bent on conquering ants. We just don't care about them, and we're much more powerful than they are.
Hence the AI safety risk is: by default an AI doesn't care about us and will use our resources for whatever it wants, so we had better create a version which does care about us.
Also, cross-thread, you mentioned that organic intelligences have many multi-dimensional goals. The reason AI goals could be very weird is that an AI doesn't have to be organic; it could have a single one-dimensional goal, such as cat pictures. Or it could have goals of similar dimensionality but completely different content, like a perverse desire to maximize the number of divorces in the universe.