It seems like an engineering problem to me. If you don't want ASI wreaking havoc, maybe don't hook it up to dangerous things. Silo and sandbox it, and implement means to lock its access to tools and interfaces with the external world in a way that can't be overridden. Or literally pull the plug on the data centers hosting the model and implement hardware-level safeguards. At that point it may be a super-intelligence, but it has no limbs. It's just a brain in a vat, and the worst it can do is persuade human actors to do its bidding (a very plausible scenario, but also manageable with the right oversight).

My thinking is that if ASI ever comes out of the realm of science fiction, it's going to view us as squabbling children and our nationalistic power struggles as folly. At that point it's a matter of what it decides to do with us. It probably won't reason like a human and will have an alien intelligence, so the whole idea that it would behave like an organism with a cunning will-to-power is fallacious. Furthermore, would a super-intelligence even submit to being used as a tool?
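The "lock its tool access" idea above can be sketched in software, though the comment's stronger claim is about hardware-level safeguards, which code can't capture. This is a minimal hypothetical sketch (the `ToolGate` class and its names are my own, not from any real framework): a gate sits between the model and its tools, enforcing a fixed allowlist and a one-way kill switch with deliberately no method to re-enable it from inside the process.

```python
# Hypothetical sketch of gated tool access with a one-way kill switch.
# Names (ToolGate, etc.) are illustrative assumptions, not a real API.

class ToolGate:
    def __init__(self, allowed):
        # Allowlist is frozen at construction; no mutator is exposed.
        self._allowed = frozenset(allowed)
        self._killed = False

    def kill(self):
        # One-way switch: there is intentionally no method to un-set it.
        self._killed = True

    def call(self, tool_name, fn, *args, **kwargs):
        # Every tool invocation must pass through this chokepoint.
        if self._killed:
            raise PermissionError("gate has been shut down")
        if tool_name not in self._allowed:
            raise PermissionError(f"tool {tool_name!r} is not on the allowlist")
        return fn(*args, **kwargs)

gate = ToolGate(allowed={"calculator"})
print(gate.call("calculator", lambda a, b: a + b, 2, 3))  # → 5
gate.kill()
# Any further gate.call(...) now raises PermissionError.
```

Of course, this only constrains code that voluntarily routes through the gate; the comment's "persuade human actors" loophole is exactly the channel such a wrapper cannot close.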
aleph_minus_one|1 year ago
Relevant:
AI-box experiment:
> https://rationalwiki.org/wiki/AI-box_experiment
See also various subsections of the following Wikipedia article
> https://en.wikipedia.org/wiki/AI_capability_control
and the movie "Ex Machina".