(no title)
Micaiah_Chang | 11 years ago
Security against human-level threats is already very poor, and there we have a relatively good threat model. If you suppose that an AI could have radically different incentives and attack vectors than a human, it seems implausible that you could be secure even in practice. I suppose you could say that the necessary defenses would be implemented in time, but it's not at all clear to me that a humanity which has trouble coordinating to stop global warming or to achieve nuclear disarmament would recognize the problem in time.
On the other hand, I'm slightly puzzled by why you think there's a huge unjustified leap between lack of value alignment and threat to the human race. Does most of your objection lie in 1) the lack of threat any given superintelligent AI would pose, because it's not going to be that much smarter than humans, or 2) the lack of worry that it'll do anything too harmful to humans, because it'll do something relatively harmless, like trade with us or leave for outer space?
For 1, I buy that it'd be a lot smarter than humans, because even if it initially starts out as merely humanlike, it can copy itself and stay productive for longer stretches of time (imagine what you could do if you didn't have to sleep, or could trade extra energy consumption for sleep). And we know a "superintelligent" machine can be at least as capable as the smartest humans alive, since those humans are an existence proof of what's physically possible. I would still not want to be in a war against a nation full of von Neumanns on weapons development, Muhammads on foreign policy, and Napoleons on military strategy.
For 2... I would need to hear specifics on how their morality would be close enough to ours to be harmless. But judging by your posts across the thread, this doesn't seem to be your main point.
By the way, I must thank you for your measured tone and willingness to engage on this issue. You seem to be familiar with some of the arguments, and perhaps just give them different weights than I do. I've seen many gut-level dismissals, and I'm very happy to see that you're laying out your reasoning process.