top | item 8748166

rrockstar | 11 years ago

We cannot say that AIs will not be dangerous just because they will not have human-like behaviour. We shouldn't let inaccurate sci-fi movies cloud our judgement like that. Just because the future will almost certainly not look like Terminator does not mean AIs are not a danger. Elon Musk's comment about the demon was not anthropomorphizing AIs; he was saying that inventing a superintelligent general AI could be like summoning a demon and trying to contain it, in the sense that it could be a point of no return. We may find out after the fact that we made a huge mistake and cannot control it. The remarks of Elon Musk and Stephen Hawking sound very alarmist, but their point is that we should think about the implications of superintelligent AIs now, not after we invent them. I am sure we can think of proper ways to 'contain the demon', but we need to do that thinking now, not once they are already 'set free'.

mindcrime | 11 years ago

We cannot say that AIs will not be dangerous just because they will not have human-like behaviour.

I agree, and I'm not making a blanket statement like that. But I think that a lot of the recent hyperbole about AI seems to assume an AGI with "bad intentions" or dangerous behavior rooted in some anthropocentric notion of goals, desires, etc. To the extent that people are saying any of those things, I think they're barking up the wrong tree, since AI - no matter how intelligent - still isn't human.

Now, could an AGI still be dangerous, whether intentionally or by accident? Yes, and I'd guess the "by accident" scenario is more likely. I don't think this is an issue that should be ignored, and I don't expect that it will be. But whipping up public hysteria over "unleashing the demon" and "AI could end the world" strikes me as overly reactionary. That's just me, though.