Hallucinations are false current sense perceptions. LLMs don’t have senses and don’t hallucinate at all; the LLM errors described as “hallucinations” are closer, if one needs an anthropomorphizing metaphor, to confabulations.
Which kind of makes sense: LLMs have almost no memory, just an instinct to respond, some instinctual responses (the result of “training”, which is also a bad metaphor; only “in-context learning” is analogous to training/learning for humans, while what is called “training” is guided evolution of frozen instincts), and whatever is in their context window. And lack of memory plus a prompt to respond is a major context in which confabulations happen with humans (these are specifically called “provoked confabulations”).