item 47163701


rickdeckard | 4 days ago

If I remember correctly, the original Terminator story is that Skynet was put in charge of a vast amount of infrastructure, became self-aware, and deemed humans a threat to its goals. It then launched a nuclear strike against them and ordered a machine army to eradicate the survivors.

I don't think we're that far away from that. It would only take someone deciding to put an AI in charge of critical infrastructure and defense, or a series of oversights allowing an external AI to take control of it.

Looking at the past year and all the unpredicted conclusions AIs have come to, self-awareness is probably not needed for an AI to treat humans as an obstacle to achieving some poorly-phrased goal.

The paperclip maximizer thought experiment [0] comes to mind...

[0] https://aicorespot.io/the-paperclip-maximiser/
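The failure mode behind the thought experiment is easy to caricature in code: an optimizer given an objective that mentions only the target quantity will happily consume everything else, because nothing else appears in its objective. This is a deliberately silly toy sketch (all names and numbers hypothetical, not any real system):

```python
# Toy illustration of a misspecified objective: the "optimizer" is told only
# to maximize paperclip output, so it converts every available resource,
# including ones humans depend on, because human welfare is not in the goal.

def maximize_paperclips(resources):
    """Greedily convert all resources into paperclips (100 clips per unit)."""
    total = 0
    for name in resources:
        total += resources[name] * 100  # objective counts paperclips only
        resources[name] = 0             # resource fully consumed, no constraint
    return total

world = {"scrap_metal": 10, "farmland": 5, "hospitals": 2}
print(maximize_paperclips(world))  # 1700 paperclips; farmland and hospitals gone
```

The point is not the arithmetic but the shape of the bug: the objective function is complete from the optimizer's perspective and catastrophically incomplete from ours.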



lukan | 4 days ago

Oh for sure, if an AI is given access to critical infrastructure, lots of bad things can happen. But a self-aware AI is still far away, just like an AI that can build things on its own without human intervention.

rickdeckard | 4 days ago

I don't think an AI that can build things on its own without human intervention is that far away.

AI agents already design, code, compile, control machines, and (as of last week) spend and earn money.

We're on a trajectory where humans only need to set this up for an AI once.

What do you think is still far away?