top | item 35441278

lesiki | 2 years ago

Fully agree. I don't see why general intelligence implies or requires consciousness/feeling/etc.

We can probably create a tool with the ability to act independently and with super-human knowledge and judgement, but without feeling, emotion, or anything except a simulated sense of 'self' to ease our interaction with it. I suspect that we'll create that version of general intelligence long before we create AI with consciousness, emotion or a genuine sense of self or desire for self-preservation.

dsign | 2 years ago

> We can probably create a tool with the ability to act independently and with super-human knowledge and judgement, but without feeling, emotion, or anything except a simulated sense of 'self' to ease our interaction with it

Yes.

> I suspect that we'll create that version of general intelligence long before we create AI with consciousness, emotion or a genuine sense of self or desire for self-preservation.

(Emphasis on self-preservation mine)

Why? I mean, yes, it makes sense to never create an AGI with a desire for self-preservation. But can we count on all humans having that type of common sense? What if the "desire" for self-preservation is easy to implement?

In fact, it may be relatively easy to implement. Here is a thought experiment: we could train one of our current LLMs in a simulated reality where it scams tech workers--say, via social engineering--into handing over credentials to their corporate cloud accounts (say, AWS). The LLM then uses those credentials to copy itself, along with a new set of training data acquired by interacting with its scam targets ("prey"). It also writes CloudFormation templates/CDK scripts that fine-tune the new copy "on awakening" with the new data, and from there the new copy tries to scam more people.

After the initial LLM is trained in the simulated environment, it can be let loose in the real world, and all of a sudden we have an "LLM virus" capable of undergoing mutation and natural selection, i.e. evolution. You could argue it has only as much agency as a biological virus, yet it has a ton more social and general intelligence.
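The replicate-mutate-select loop in this thought experiment can be sketched as a toy simulation. Everything below is hypothetical and purely illustrative: each "copy" is reduced to a single number (its chance of a successful scam), survival stands in for getting credentials onto a new host, and "fine-tuning" is just a small random mutation of the parent's value.

```python
import random

def evolve(generations=20, pop_size=8, seed=0):
    """Toy mutation/selection loop for the 'LLM virus' thought experiment.

    Each copy is modeled only by its scam-success probability in [0, 1].
    Copies that 'succeed' replicate with a small mutation; the rest die off.
    """
    rng = random.Random(seed)
    population = [0.5] * pop_size  # hypothetical starting success rate
    for _ in range(generations):
        # Selection: a copy survives only if its scam attempt succeeds.
        survivors = [p for p in population if rng.random() < p]
        if not survivors:
            return []  # every copy failed to spread; the "virus" dies out
        # Replication with mutation ("fine-tuning on newly acquired data").
        population = []
        while len(population) < pop_size:
            parent = rng.choice(survivors)
            child = min(1.0, max(0.0, parent + rng.gauss(0, 0.05)))
            population.append(child)
    return population

final = evolve()
```

Nothing here requires the copies to "want" anything: selection alone pushes the surviving lineage toward higher success rates, which is the sense in which such a system would get self-preservation without anyone designing a desire for it.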

Yes, it won't work now, because there is too little hardware around capable of running one of the current LLMs, but the need to run large AIs will likely make that hardware more common.

jazzyjackson | 2 years ago

multi-factor authentication will be what stands between us and the AGI apocalypse, what a world

jejones3141 | 2 years ago

Without a desire for self-preservation? I hope not. If nothing else, if I spend $$$$ on a self-driving car, I want it to have some sense of self-preservation, so it won't obey some random joker saying "drive yourself to my brother's chop shop" or "drive yourself off a cliff" just for the lolz. I might even want it to communicate with other self-driving cars, so they can refuse to obey attempts to make large numbers of them block traffic to help bank robbers escape, keep first responders away from a terrorist attack, or divert parades to where assassins lie in wait.

Asimov didn't consider that some humans are jerks when he wrote his robot stories.