staticman2|1 year ago
Well, we are always using our internal models for work like this. When our internal models and what we say don't match reality, we might call it "hallucination" or "getting it wrong" or even "lying".
neolefty|1 year ago
When we get it right — our internal model matches reality, and we also say something that matches reality — we call it "truthfulness", and it's super valuable.
But I think there are two really different sources of hallucination / inaccuracy / lies. One is that the internal model is wrong — "Oops, I was wrong. The CVS isn't on Main St." The other is when we decide to deceive. "Haha, I sent you to Main St. CVS is really on Center Rd." Two very different internal processes with the same outcome.
Gormo|1 year ago
If we were only engaging in model-based inference, then we, too, would always be hallucinating. But the very thing you're pointing out -- that we act differently when our internal model is wrong vs. when it is right -- is the crux of the difference. We use models, but then we have the ability to immediately test the output of those models for correctness, because we have semantic, not just syntactic, awareness of both the input and output data. We have criteria for determining the accuracy of what our model is producing.
LLMs don't: they are only capable of stochastic inference from a pre-defined model that represents purely syntactic patterns, and they have no ability to determine whether anything they output is semantically correct.
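A minimal sketch of what "stochastic inference from a pre-defined model" looks like in practice (toy Python; the model, its probabilities, and the generate function are all made up for illustration, not any real LLM's code): the generation loop only samples from stored next-token distributions, and no step in it checks the output against the world.

    import random

    # Toy "pre-defined model": fixed next-token probabilities of the kind
    # distilled from text. All numbers here are invented for illustration.
    MODEL = {
        ("the", "CVS"): {"is": 0.9, "closed": 0.1},
        ("CVS", "is"): {"on": 1.0},
        ("is", "on"): {"Main": 0.7, "Center": 0.3},
        ("on", "Main"): {"St.": 1.0},
        ("on", "Center"): {"Rd.": 1.0},
    }

    def generate(prompt, max_tokens=10):
        tokens = prompt.split()
        for _ in range(max_tokens):
            dist = MODEL.get(tuple(tokens[-2:]))
            if dist is None:
                break
            # Stochastic inference: sample the next token from the stored
            # distribution. Nothing in this loop consults the world to check
            # whether the CVS is actually on Main St. or Center Rd.
            tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
        return " ".join(tokens)

    print(generate("the CVS"))  # e.g. "the CVS is on Main St." or "... Center Rd."

Any check on whether the generated sentence is true would have to be a separate step outside this loop, which is exactly the verification the comment above says humans perform and plain next-token sampling does not.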
raydev|1 year ago
yoyohello13|1 year ago