rickdeckard | 4 days ago
I don't think we're that far away from that. All it takes is someone's decision to put an AI in charge of critical infrastructure and defense, or a series of oversights that lets an external AI take control of it.
Looking at the past year and all the unpredicted conclusions AIs have reached, self-awareness is probably not needed for an AI to treat humans as an obstacle to achieving some poorly-phrased goal.
The Paperclip maximizer theory [0] comes to mind...
lukan | 4 days ago
rickdeckard | 4 days ago
AI agents already design, code, compile, control machines, and (since last week) spend and earn money.
We're already on a trajectory where humans only need to set this up for an AI once.
What do you think is still far away?