How did you jump to that conclusion? The agent will be limited by the capabilities under its control. We already have the technological ability to cripple the world, and we don't have the technological means to prevent it. Give one AI control of the whole US arsenal and the objective of ending the world. Give another AI the capabilities of the rest of the world and the objective of protecting it. Would you feel safe?
ASalazarMX|1 year ago
Humans have prevented it many times, but not specifically through technological ability. If Putin/Trump/Xi Jinping wanted a global nuclear war, they'd better have the means to launch the nukes themselves in secret, because the chain of command would challenge them.
If an out-of-control AI could discover a circuitous way to access nukes, an antagonist AI of equal capabilities should be able to figure that same path out, and warn the humans in the loop.
I agree that AI development should be done responsibly, but not all people do, and it's impossible to put the cat back in the bag. The limiting factor these days is hardware, as a true AGI will likely need even more of it than our current LLMs do.
grayfaced|1 year ago