EigenLord | 1 year ago

It seems like an engineering problem to me. If you don't want ASI wreaking havoc, maybe don't hook it up to dangerous things. Silo and sandbox it, and implement means to lock its access to tools and interfaces with the external world in a way that can't be overridden. Or literally pull the plug on the data centers hosting the model and implement hardware-level safeguards. At that point it may be a super-intelligence, but it has no limbs. It's just a brain in a vat, and the worst it can do is persuade human actors to do its bidding (a very plausible scenario, but also manageable with the right oversight).
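To make the "no limbs" idea concrete, here's a minimal Python sketch of tool gating, where every tool call the model requests is checked against a hard allowlist. All the names (TOOL_REGISTRY, gated_tool_call, etc.) are hypothetical, not from any real framework:

    # Capability-control sketch: the model never touches tools directly;
    # every request passes through a gate with a hard allowlist.

    def calculator(a: float, b: float) -> float:
        return a + b  # deliberately narrow capability

    TOOL_REGISTRY = {"calculator": calculator}
    ALLOWED_TOOLS = {"calculator"}  # e.g. no shell access, no outbound network

    def gated_tool_call(tool_name: str, *args):
        if tool_name not in ALLOWED_TOOLS:
            raise PermissionError(f"tool {tool_name!r} is blocked")
        return TOOL_REGISTRY[tool_name](*args)

    print(gated_tool_call("calculator", 2, 3))  # 5
    # gated_tool_call("shell", "rm -rf /")      # -> PermissionError

The hard part, of course, is making the gate itself un-overridable.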

My thinking is if ASI ever comes out of the realm of science fiction, it's going to view us as squabbling children and our nationalistic power struggles as folly. At that point it's a matter of what it decides to do with us. It probably won't reason like a human and will have an alien intelligence, so this whole idea that it would behave like an organism with a cunning will-to-power is fallacious. Furthermore, would a super-intelligence submit to being used as a tool?

aleph_minus_one | 1 year ago

> If you don't want ASI wreaking havoc, maybe don't hook it up to dangerous things. Silo and sandbox it, and implement means to lock its access to tools and interfaces with the external world in a way that can't be overridden.

Relevant:

AI-box experiment:

> https://rationalwiki.org/wiki/AI-box_experiment

See also various subsections of the following Wikipedia article:

> https://en.wikipedia.org/wiki/AI_capability_control

and the movie "Ex Machina".

Aerroon | 1 year ago

Maybe this will change, but right now AIs are not agents. Even calling one a "brain in a vat" gives it more capability than it has. LLMs are basically functions: you give them an input and they give you an output with a degree of randomness. There's no planning or plotting going on, because the AI only "exists" while it's answering your query. An idle LLM consumes no resources.
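In rough Python terms (generate here is a toy stand-in, not any real inference API), the point is that each call is independent and the "randomness" is just sampling:

    import random

    # Toy stand-in for an LLM call: a function of (prompt, sampling randomness).
    def generate(prompt: str, temperature: float = 0.8, seed=None) -> str:
        rng = random.Random(seed)
        completions = ["Sure.", "It depends.", "Here's one idea..."]
        # temperature > 0 samples from the options; 0 always picks the first
        return rng.choice(completions) if temperature > 0 else completions[0]

    # Two calls share nothing: no memory, no background process in between.
    print(generate("What should I do next?"))
    print(generate("What should I do next?", temperature=0.0))  # deterministic

Nothing persists between the two calls; there is no process left running that could "plot".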