Micaiah_Chang|11 years ago
But for the "off" switch question specifically, a superintelligence could also have "persuasion" and "salesmanship" among its abilities. It could start saying things like "Wait, no, it's actually Russia that's creating that massive botnet; you should do something about them," or "You know that cancer cure you've been looking for for your child? I may be a cat-picture AI, but if I had access to the internet I could find a solution in a month instead of a year and save her."
At least from my naive perspective, once it has access to the internet it gains the ability to become highly decentralized, in which case the "off" switch becomes much more difficult to hit.
tptacek|11 years ago
But it doesn't take a deep appreciation for the dangers of artificial intelligence to see that. You can just understand the concept of a software bug to know why you want humans in the observe/decide/act loop of critical systems.
So there must be more to it than that, right? It can't just be "be careful about AI, you don't want it controlling all the airplanes at once".
Micaiah_Chang|11 years ago
The fear is that maybe there's no such thing as a "superintelligence-proof" system once the human component is no longer secure.
Note that I don't completely buy into the threat of superintelligence either, though for a different reason. I do believe it's a problem worthy of consideration, but I think recursive self-improvement is more likely to happen on manageable time scales, or at least on time scales slow enough that we can begin substantially ramping up our worries before it becomes likely.
Edit: Ah! I see your point about circularity now.
Most of the attack vectors I've been naming are the more obvious ones. But the fear is that, for a superintelligent being, perhaps anything is a vector. Perhaps it can manufacture nanobots independently of a biolab (do we somehow have universal surveillance of every possible place that has proteins?); perhaps it uses mundane household tools to MacGyver up a robot army (do we ban all household tools?). Yes, in some sense it's an argument from ignorance, but I find it implausible that every attack vector has been covered.
Also, there are two separate points I want to make. First, there's going to be a difference between "secure enough to defend against human attacks" and "secure enough to defend against superintelligent attacks". You are right that the former is important, but it's not clear to me that the latter is achievable, or that it wouldn't be cheaper to investigate AI safety rather than upgrade everything from human-secure to super-AI-secure.
tedunangst|11 years ago
WillNotDownvote|11 years ago
Then you can see how it gains "control", in the senses in which control matters anyway, without us necessarily even realizing it, or objecting if we do.