I mean, humans do that. We are remarkably contradictory when expressing ourselves, generally speaking, often without realizing it because we'll change our thinking in the moment to fit the current narrative or circumstance. LLMs just put that on blast.
quickthrower2|2 years ago
naasking|2 years ago
Tribe fitting doesn't sound as far off from minimizing loss functions as you imply.
MVissers|2 years ago
Split-brain patients can make up stories when you prompt the other hemisphere and the current half has to explain why it did something.
I'm also baffled that reverse psychology even works on LLMs, to bypass some of their safeties. I mean... we're using psychological tricks that work on toddlers, and they also work on these models.
I'm an amateur neuroscientist, as you can see, but I find LLMs fascinating.
vidarh|2 years ago
civilized|2 years ago
vidarh|2 years ago
Having done phone support early in my career, I'd strongly dispute any notion that most humans can be consistent for anything more than the shortest periods, or while following anything but the very simplest of instructions.
Most people are really awful at maintaining the level of focus needed to be consistent, and it's one of the reasons we spend so much time drilling people on specific behaviours until they're near automatic, instead of e.g. teaching people the rules of arithmetic, or driving, or any other skill and expecting them to consistently follow the rules they've learnt. And most of us still keep making mistakes while doing things we've practised over and over and over.
LLMs are still bad at being consistent, sure, but I've seen nothing to suggest that this is anything inherent.
I think one of the biggest issues with LLMs, if anything, is that they've gotten too good at expressing themselves well, so we overestimate the reasoning we should expect from them in other areas. E.g. we're not used to an eloquent answer from someone unable to maintain coherent focus and step-by-step reasoning, because human children don't learn to speak like this before they're also able to reason fairly well, and that makes it confusing to deal with LLMs, whose relative stage of development across different skills doesn't match what we expect.
awestroke|2 years ago
That's a bold statement. Do you have evidence for this?
bmacho|2 years ago
Also, any strategy that you might come up with that is simple enough for you to follow is trivially followable for ChatGPT as well.
smusamashah|2 years ago
chongli|2 years ago
Isn’t that what scammers, catfishers, con artists, and marketers do?
PartiallyTyped|2 years ago
Is that not what reasoning is in a debate?
> Maybe hypnosis is just a prompt injection attack.
No need to go into hypnosis. Mere prompts can inject false memories. This has been proven multiple times [1].
[1] David Eagleman, The Brain: The Story of You.
beardedetim|2 years ago
quickthrower2|2 years ago
eurekin|2 years ago
bheadmaster|2 years ago
Exactly.
It seems to me that today's LLMs are on the level of human children. Children often say random bullshit. They try to figure things out, and succeed up to a point, but fail spectacularly beyond that, falling back on whatever connection they can sense. It's just that human children usually learn to fear mistakes, so they stop expressing themselves so freely.
LLMs seem like human children, but without the fear-of-mistakes part - they just spew whatever comes to mind, without filter (except those """ethical""" filters built in by OpenAI).
pydry|2 years ago
Humans are capable of creating cargo cults, but it seems LLMs are destined for them.