top | item 35209802

trippingrobot | 2 years ago

An aside: what do people mean when they say “hallucinations” generally? Is it something more refined than just “wrong”?

As far as I can tell most people just use it as a shorthand for “wow that was weird” but there’s no difference as far as the model is concerned?

bombcar | 2 years ago

Wrong is saying that 2 + 2 is 5.

Wrong is saying that the sun rises in the west.

By hallucinating they’re trying to imply that it didn’t just get something wrong but instead dreamed up an alternate world where what you want existed, and then described that.

Or another way to look at it, it gave an answer that looks right enough that you can’t immediately tell it is wrong.

permo-w | 2 years ago

This isn't a good explanation. These LLMs are essentially statistical models: when they "hallucinate", they're not "imagining" or "dreaming", they're simply producing a string of tokens that your prompt, combined with the training corpus, makes statistically likely.
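A toy sketch of that point: next-token generation is just sampling from a probability distribution over candidate tokens, with no separate "truth" channel. The vocabulary, logit values, and prompt below are invented for illustration only; a real LLM computes logits over ~100k tokens from learned weights.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())                       # subtract max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

# Hypothetical next-token scores after the prompt
# "The capital of Australia is" — values chosen for illustration.
logits = {"Canberra": 2.0, "Sydney": 1.6, "Melbourne": 0.5}
probs = softmax(logits)

# Sampling picks the wrong-but-plausible "Sydney" a substantial
# fraction of the time, by exactly the same mechanism that picks
# the correct "Canberra". Nothing in the process distinguishes them.
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print(token)
```

Under this view, a "hallucination" is just a sample that happens to be false about the world; the generation mechanism is identical either way.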

mlhpdx | 2 years ago

Most people don’t understand the technology and maths at play in these systems. That’s normal, as is reaching for familiar words that make the gap feel less daunting. If you have a genuine interest in understanding how and why errant generated content emerges, it will take some study. There isn’t (in my opinion) a quick, helpful answer.

trippingrobot | 2 years ago

I genuinely want to understand whether there’s a meaningful difference between non-hallucinatory and hallucinatory content generation other than “real world correctness”.