oo0shiny | 6 months ago

> My former colleague Rebecca Parsons has been saying for a long time that hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.

What a great way of framing it. I've been trying to explain this to people, but this is a succinct version of what I was stumbling to convey.

jstrieb|6 months ago

I have been explaining this to friends and family by comparing LLMs to actors. They deliver a performance in-character, and are only factual if it happens to make the performance better.

https://jstrieb.github.io/posts/llm-thespians/

red75prime|6 months ago

The analogy goes down the drain when the criterion for a good performance is being objectively right, as with Reinforcement Learning from Verifiable Rewards.
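
To make "being objectively right" concrete: a verifiable reward is one a program can check rather than one a human has to judge. Below is a minimal sketch in Python; the toy arithmetic task, the answer-extraction rule, and the function names are illustrative assumptions, not any particular RLVR implementation.

    # Minimal sketch of a "verifiable reward": the score comes from an
    # objective check, not from a preference judgement. The toy task and
    # names are illustrative assumptions, not a real RLVR pipeline.

    def extract_final_answer(completion: str) -> str:
        """Treat the last whitespace-separated token as the model's answer."""
        tokens = completion.split()
        return tokens[-1] if tokens else ""

    def verifiable_reward(completion: str, expected: str) -> float:
        """Return 1.0 if the final answer matches the known ground truth
        exactly, else 0.0: an objectively checkable training signal."""
        return 1.0 if extract_final_answer(completion) == expected else 0.0

    # Toy arithmetic prompt with a known correct answer.
    print(verifiable_reward("2 + 2 equals 4", expected="4"))  # 1.0
    print(verifiable_reward("2 + 2 equals 5", expected="4"))  # 0.0

An RL fine-tuning loop would then push the model toward completions that score 1.0, which is why "deliver a good performance" and "be factually right" stop being separable in this setting.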

ngc248|6 months ago

A better analogy is an overconfident 5-year-old kid who never says they don't know the answer and always has an "answer" for everything.

aitchnyu|6 months ago

All models are wrong, but some are useful - a 1976/1933/earlier adage.

lagrange77|6 months ago

Right, all models are inherently wrong. It's up to the user to know about their limits / uncertainty.

But I think this 'being wrong' is kind of confusing when talking about LLMs (in contrast to systems/scientific modelling). In what they model (language), the current LLMs are really good and accurate, except for, say, the occasional Chinese character in the middle of a sentence.

But what we mean by LLMs 'being wrong' is, most of the time, being factually wrong in answering a question that is expressed as language. That's a layer on top of what the model is designed to model.

EDITS:

So saying 'the model is wrong' when it's factually wrong above the language level isn't fair.

I guess this is essentially the same thought as 'all they do is hallucinate'.

pjmorris|6 months ago

Generally attributed to George Box.

mohsen1|6 months ago

Intelligence, in a way, is the ability to filter out useless information, be it thoughts or sensory information.

tugberkk|6 months ago

Yes, I can't remember who said it, but LLMs always hallucinate; it's just that they are 90-something percent right.

ljm|6 months ago

If I were to drop acid and hallucinate an alien invasion, and then a xenomorph suddenly ran loose around the city while I was tripping balls, would being right in that one instance mean the rest of my reality is also a hallucination?

Because the point being made multiple times seems to be that a perceptual error isn't a key component of hallucinating; the whole thing is instead just a convincing illusion that could theoretically apply to all perception, not just the psychoactively augmented kind.

OtomotO|6 months ago

Which totally depends on your domain and subdomain.

E.g. Programming in JS or Python: good enough

Programming in Rust: I can scrap over 50% of the code because it will

a) not compile at all (I can see this while the "AI" is typing)

b) not meet the requirements at all