
tinsmith | 2 years ago

I mean, humans do that. We are remarkably contradictory when expressing ourselves, generally speaking, often without realizing it because we'll change our thinking in the moment to fit the current narrative or circumstance. LLMs just put that on blast.


quickthrower2|2 years ago

The reason humans do this and the reason LLMs do it are very different. Humans do it as a social skill / tribe-fitting behaviour. Agreeableness. (watch out for that!)

naasking|2 years ago

> The reason humans do this and the reason LLMs do it are very different. Humans do it as a social skill / tribe-fitting behaviour.

Tribe fitting doesn't sound as far off from minimizing loss functions as you imply.

MVissers|2 years ago

Humans do this unconsciously as well. You have some extreme medical conditions like confabulation where people just make up the strangest things.

Split-brain patients can make up stories when you prompt one hemisphere and the other half has to explain why it did something.

I'm also baffled that reverse psychology even works on LLMs, to bypass some of their safeguards. I mean... we're using psychological tricks that work on toddlers, and they also work on these models.

I'm an amateur neuroscientist, as you can see, but I find LLMs fascinating.

vidarh|2 years ago

Is it? To my knowledge we don't have reliable data on why humans do this. To me it appears as if we spend a significant amount of our time retroactively making up justifications, even to ourselves, for things there's little reason to think we did based on any conscious decision-making at all.

civilized|2 years ago

Humans can be consistent if we try. LLMs can't, even when prompted to be consistent, because they don't really understand what it means to be consistent.

vidarh|2 years ago

Some humans can be consistent if we try some of the time.

Having done phone support in the early part of my career, I'd strongly dispute any notion that most humans can be consistent, even when trying, for anything more than the shortest periods or while following anything but the very simplest of instructions.

Most people are really awful at maintaining the level of focus needed to be consistent, and it's one of the reasons we spend so much time drilling people on specific behaviours until they're near automatic, instead of e.g. teaching people the rules of arithmetic, or driving, or any other skill and expecting them to consistently follow the rules they've learnt. And most of us still keep making mistakes while doing things we've practised over and over and over.

LLMs are still bad at being consistent, sure, but I've seen nothing to suggest that this is anything inherent.

I think one of the biggest issues with LLMs, if anything, is that they've gotten too good at expressing themselves well, so we overestimate the level of reasoning we should expect from them in other areas. E.g. we're not used to an eloquent answer from someone unable to maintain coherent focus and step-by-step reasoning, because human children don't learn to speak like this before they're also able to reason fairly well, and that makes it confusing to deal with LLMs, where the relative stage of development of different skills does not match what we expect.

awestroke|2 years ago

> Humans can be consistent if we try.

That's a bold statement. Do you have evidence for this?

bmacho|2 years ago

No, you can't, in any meaningful sense. Even the biggest bigots, with the most stable beliefs, like RMS or the Pope, have a lot of contradictory beliefs. (At least, I think so. I don't have any evidence for this.)

Also, any strategy that you might come up with, and that is simple enough for you to follow, is trivially followable for ChatGPT as well.

smusamashah|2 years ago

Humans are not mere LLMs. But it's nice to think we are. Maybe we can program other humans with carefully constructed prompts alone. Maybe hypnosis is just a prompt injection attack.

chongli|2 years ago

> Maybe we can program other humans with carefully constructed prompts alone

Isn’t that what scammers, catfishers, con artists, and marketers do?

PartiallyTyped|2 years ago

> Maybe we can program other humans with carefully constructed prompts alone.

Is that not what reasoning is in a debate?

> Maybe hypnosis is just a prompt injection attack.

No need to go into hypnosis. Mere prompts can inject false memories. This has been proven multiple times [1].

[1] The Brain: The Story of You, David Eagleman.

beardedetim|2 years ago

The pseudo-science of NLP (_neuro-linguistic programming_) or "conversational hypnosis" is based on the belief that you _can_ program people with carefully constructed prompts alone. Not saying I _believe_ NLP to be valid (_hence the pseudo-science callout_), but I did fall down the Milton H. Erickson wormhole 10-20 years ago, so I can't say I never fell for it myself.

quickthrower2|2 years ago

I do feel like a stage hypnotist when prompt engineering an LLM!

eurekin|2 years ago

... or a cable TV

bheadmaster|2 years ago

> We are remarkably contradictory when expressing ourselves, generally speaking, often without realizing it because we'll change our thinking in the moment to fit the current narrative or circumstance. LLMs just put that on blast.

Exactly.

It seems to me that today's LLMs are on the level of human children. Children often say random bullshit. They try to figure things out and succeed up to a point, but fail spectacularly beyond that, regressing to whatever connection they can sense. It's just that human children usually learn to fear mistakes, so they stop expressing themselves so freely.

LLMs seem like human children, but without the fear-of-mistakes part - they just spew whatever comes to mind, without a filter (except those """ethical""" filters built in by OpenAI).

pydry|2 years ago

There's a retrospective mechanism working within human intelligence that is very evidently not in play with LLMs.

Humans are capable of creating cargo cults, but it seems LLMs are destined for it.