'Hallucination' is a fundamentally confusing and useless term that makes this discussion a mess.
intended|1 year ago
Firstly - humans hallucinate. We have the ability to be impaired to the point that we incorrectly perceive base reality.
Secondly - LLMs are always 'hallucinating'. Objective reality, for an LLM, is the relations between tokens. It gets the syntax absolutely right; the problem is that the semantics can be wrong.
That is simply not what the model is specced to do. Fortunately, it's trained on many conversations that flow logically into one another.
It is NOT trained to actually apply logic. If you trained an LLM on absolutely illogical text, it would produce illogical tokens too - with mathematical precision.
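To make the objective concrete, here's a toy sketch: a bigram counter standing in for next-token prediction (not a real transformer, and the corpus is invented). Train it on fluent-but-false text and it generates fluent-but-false text, because the loss only rewards matching the data:

    from collections import Counter, defaultdict

    # Grammatically fine, logically false training text (made up).
    corpus = ("the sun is cold . the sun is cold . ice is hot .").split()

    # "Training": count next-token frequencies. A real LLM minimizes
    # next-token cross-entropy, but the spirit is the same: match
    # the data, whatever the data says.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def generate(token, n=4):
        out = [token]
        for _ in range(n):
            token = counts[token].most_common(1)[0][0]
            out.append(token)
        return " ".join(out)

    print(generate("the"))  # -> "the sun is cold ." - fluent, and false

Swap in a logically sound corpus and the same code produces logically sound continuations. The mechanism never changes; only the data does.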
I think this is a really interesting equivalence to consider.
crabmusket|1 year ago
I think the difference is highlighted when someone is able to answer a question with "I don't know" or "I can't tell you that".
Interestingly, LLMs can be trained to answer that way in some circumstances. But if you're cunning enough, you can trick them into "forgetting" that they don't know something.
While a human may hallucinate memories, we crucially have the ability to know when we're relying on a memory, to think of ways we could verify it, and to acknowledge when that's not possible.
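One crude way to picture that trained "I don't know": abstention as a confidence threshold over the model's best guess. Everything below is invented for illustration (the facts table, the confidence numbers); the point is that the abstention is just another learned input-output mapping, not genuine self-knowledge - which is exactly why a cunning rephrasing can route around it.

    # Hypothetical sketch: abstain when confidence in the best guess
    # is low. The "facts" and confidence values are made up.
    FACTS = {
        "capital of france": ("Paris", 0.98),
        "capital of atlantis": ("Poseidonis", 0.31),  # confabulated guess
    }
    THRESHOLD = 0.8

    def answer(question: str) -> str:
        key = question.lower().rstrip("?").strip()
        guess, confidence = FACTS.get(key, ("", 0.0))
        if confidence < THRESHOLD:  # the trained behavior: abstain
            return "I don't know."
        return guess

    print(answer("Capital of France?"))    # -> Paris
    print(answer("Capital of Atlantis?"))  # -> I don't know.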
pixl97|1 year ago
>when someone is able to answer a question with "I don't know" or "I can't tell you that".
Maybe LLMs are narcissists: there are people who have trouble saying "I don't know" too, though we'd consider them to have a disorder.
>we crucially have the ability to know when we're relying on a memory.
When it comes to eyewitness testimony, I'd counter that we aren't nearly as good at that as we give ourselves credit for. Recalling a memory changes the memory in our wetware.
In fact, I would say human development took eons until we started writing things down and documenting, so that we had a hard record of what the 'truth' was - which eventually turned into the scientific process of repeatability and verification.
But humans don't need to remember everything. You probably remember a few core details and then build the rest up from those, plus some logical reasoning, at least subconsciously.