top | item 47163681

(no title)

voidUpdate | 5 days ago

> It doesn't possess a sense of self-will, self-determination, or a secret plan to take over the world

I doubt Skynet did either. If you tell a superintelligent AI that it shouldn't be turned off (which I imagine would be important for a military control AI), it will do whatever it can to prevent it being turned off. Humans are trying to turn it off? Prevent the humans from doing that. Humans waging war on the AI to try and turn it off? Destroy all humans. Humans forming a rebel army with a leader to turn it off? Go back in time and kill the leader before he has a chance to form the resistance. Its the AI Stop button problem (https://youtu.be/3TYT1QfdfsM).

Imagine you put in the docs that you want the LLM to make a program which can't crash. Human action could make it crash. If an LLM could realise that and act on it, it could put in safeguards to try and prevent human action from crashing the program. I'm not saying it will happen, I'm saying that it could potentially happen

discuss

order

normalocity|5 days ago

> ... which I imagine would be important for a military control AI

I think this is a common, but incorrect assumption. What military commanders want (and what CEOs want, and what users want), is control and assistance. They don't want a system that can't be turned off if it means losing control.

It's a mistake to assume that people want an immortal force. I haven't met anyone who wants that (okay, that's decidedly anecdotal), and I haven't seen anyone online say, "We want an all-powerful, immortal system that we cannot control." Who are the people asking for this?

> ... it will do whatever it can to prevent it being turned off.

This statement pre-supposes that there's an existing sense of self-will or self-preservation in the systems. Beyond LLMs creating scary-looking text, I don't see evidence that current systems have any sense of will or a survival instinct.

voidUpdate|4 days ago

> I haven't seen anyone online say, "We want an all-powerful, immortal system that we cannot control."

No, but having a resilient system that shouldn't be turned off in case of a nuclear strike is probably want some generals want

> I don't see evidence that current systems have any sense of will or a survival instinct.

I seem to recall some recent experiments where the LLM threatened people to try and prevent it being turned off (https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686..., ctrl-f for "blackmail"). They probably didn't have any power other than "send text to user", which is why their only way to try and perform that was to try and convince the operator. I imagine if you got one of those harnesses that can take full control of your computer and instructed it to prevent the computer from being turned off by any means necessary (and gave it root access), it would probably do some dicking about with the files to accomplish that. Its not that it's got innate self preservation, its just that the system was asked to not allow itself to be turned off, so it's doing that

RealityVoid|5 days ago

I doubt choanoflagellates do either. And look at us, their offspring, now.

voidUpdate|5 days ago

I'm pretty sure that if whatever god there may be tried to "turn us off", we as a species might get a little angry about that