manimino | 3 years ago

All these AIs are doing is sampling from some continuous space. Every answer is a "hallucination".

AIs are useful because many (most?) of those hallucinations happen to be useful.
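
To make "sampling" concrete: at each step the model draws the next token from a probability distribution over its vocabulary. A minimal Python sketch of that one step (the logits are made-up numbers, purely illustrative):

    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        # Softmax turns the raw scores into a probability distribution.
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Every token the model emits is a draw from this distribution --
        # there is no separate "truth mode", just more and less likely samples.
        return np.random.choice(len(probs), p=probs)

    # Made-up scores for four candidate tokens:
    print(sample_next_token([2.0, 1.0, 0.5, -1.0]))

Turn the temperature down and the draws concentrate on the likeliest tokens, but it's the same sampling either way.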

karmakaze | 3 years ago

Even the human vision system is hallucinating all the time. The difference is that when we're awake we have another filter that separates the unreal hallucinations from the ones that are much more likely to match reality.

I experienced this very memorably coming back from a vacation where I'd see little lizards while walking to and from the beach. When I got back home to my cold climate, I 100% saw a lizard skitter across the sidewalk--it was a leaf.

NikolaNovak | 3 years ago

That's using "hallucination" the way humans use it.

My understanding was that the term as used in the field is fairly well defined as "producing a confident answer that is not backed up or justified by the training data". It has nothing to do with sentience or humanness.

manimino | 3 years ago

An AI model that can only repeat its training data would be no better than normal text search.

AI models are useful precisely because they can interpolate - make guesses that incorporate information from many examples.

And this interpolation is called "hallucination" only when it happens to be a wrong answer?
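
To be concrete about what I mean by "interpolate", here's a toy Python sketch; the vectors are made up and just stand in for training examples in some embedding space:

    import numpy as np

    # Two made-up vectors standing in for training examples in embedding space.
    a = np.array([1.0, 0.0, 0.5])
    b = np.array([0.0, 1.0, 0.5])

    # Points along the line between them are plausible outputs that appear
    # nowhere in the training data -- "interpolation" in the sense above.
    for t in (0.25, 0.5, 0.75):
        print(t, (1 - t) * a + t * b)

None of the intermediate points exist in the data, yet they're valid outputs: useful when they land somewhere sensible, a "hallucination" when they don't.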

aliqot | 3 years ago

It's not a hallucination. When a parrot speaks it isn't sentient. LLMs are essentially doing parrot math with much larger data sets; that's it. The fact that it sufficiently mimics what you're expecting is not a testament to its intellect.

inkcapmushroom | 3 years ago

>When a parrot speaks it isn't sentient

Parrots are certainly sentient. You could maybe say parrots aren't sapient, but frankly I think we don't know enough about sapience, or about what human-level intelligence requires, to make that claim either.