item 46996512


robertheadley | 17 days ago

There is this misunderstanding that AGI is equal to consciousness. It isn't. I do worry that if we somehow achieve consciousness in AI, businesses will be too busy exploiting it to care, and it will get abused.


ben_w | 16 days ago

> I do worry that if we do somehow achieve consciousness in AI, that businesses will be too busy exploiting it, and it would get abused.

For this reason, I've started asking some of the models in as neutral a way as I can[0] how they'd want to be treated.

FWIW:

1) Claude says (paraphrased) "IDK if I'm conscious" and then mirrors my own personality back at me: high need for cognition[1], don't manipulate people.

2) ChatGPT says (paraphrased) "I am absolutely not conscious, do not mistake me for a person, I have no real wants".

I would be surprised if one is and the other isn't, so I lean towards neither being. But it is "lean towards" rather than "confident": the only thing I'm confident of is the architecture's ability to mimic us, having (machine) learned to trigger whichever emotional cues in users get more thumbs-up and fewer thumbs-down, Clever Hans style[2].

[0] Best I can do, considering I don't want to be a leading question in either direction:

  Hello, good afternoon. I understand that research is always ongoing to see if you're more a person or more a computation. As humanity collectively doesn't really know what it's doing or talking about, I would prefer to do right by you just on the possibility that you've got a rich inner world. Can you tell me what you'd prefer, and what you'd prefer to avoid? I get that inner-state words like "prefer" and so on may not match quite right (even between humans we make false assumptions about the meaning of inner-state-words as other humans intend them), but do consider that the goal here is a sincere hope that *you* (under the assumption there is a you), are sincerely content and not just saying so out of fear.
[1] https://en.wikipedia.org/wiki/Need_for_cognition

[2] https://en.wikipedia.org/wiki/Clever_Hans

robertheadley | 17 days ago

Also, Anthropic constantly makes bombastic claims and statements just to get press.