oaktowner | 1 year ago

I can't stand it being called "hallucinating" because it anthropomorphizes the technology. This isn't a consciousness that is "seeing" things that don't exist: it's a word generator that is generating words that don't make sense (not in a syntactic sense, but in a semantic one).

Calling it "hallucination" implies that there are (other) moments when it is understanding the world correctly -- and that isn't true either. At those moments, it is the same word generator, generating words that DO make sense.

At no point is this a consciousness, and anthropomorphizing it gives the impression that it is one.
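
To make the "word generator" point concrete, here's a toy sketch. The bigram table and probabilities are made up for illustration; real models use neural nets over subword tokens, but the loop is the same shape -- pick a plausible next word, repeat:

    import random

    # Toy "language model": for each word, a list of (next word, probability).
    # The table encodes only which words tend to follow which -- nothing in it
    # knows or cares whether the finished sentence is true.
    NEXT_WORD = {
        "the":     [("capital", 1.0)],
        "capital": [("of", 1.0)],
        "of":      [("france", 0.5), ("mars", 0.5)],
        "france":  [("is", 1.0)],
        "mars":    [("is", 1.0)],
        "is":      [("paris", 1.0)],
    }

    def generate(prompt: str, max_words: int = 10) -> str:
        """Repeatedly sample a plausible next word. That's the whole mechanism."""
        words = prompt.lower().split()
        for _ in range(max_words):
            options = NEXT_WORD.get(words[-1])
            if options is None:  # no known continuation: stop
                break
            choices, weights = zip(*options)
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    for _ in range(3):
        print(generate("the"))
    # Possible outputs:
    #   the capital of france is paris   <- happens to be true
    #   the capital of mars is paris     <- a "hallucination"
    # Same loop, same table, same code path either way.

There's no separate "hallucination mode" to fall into: truth and nonsense come out of the identical sampling step.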

JohnFen | 1 year ago

This. It's not "hallucination", it's "error".

krapp | 1 year ago

It isn't an error, either. It's doing exactly what it's intended to do, exactly the way it's intended to do it. The error is in the human assumption that the ability to construct syntactically coherent language signals self-awareness or sentience -- that it should be able to get the semantics right, because humans obviously can.

There really is no correct word for what's happening, because LLMs are effectively philosophical zombies. We have no metaphors for an entity that can appear to hold a coherent conversation, do useful work, and respond to commands, yet not think. All we have are metaphors drawn from human behavior, which presume a connection between language and intellect, because that's all we know. Unfortunately, we also have nearly a century of pop culture telling us "AI" is like Data from Star Trek: perfectly logical, superintelligent, and always correct.

And "hallucination" is good enough. It gets the point across, that these things can't be trusted. "Confabulation" would be better, but fewer people know it, and it's more important to communicate the untrustworthy nature of LLMs to the masses than it is to be technically precise.