thewataccount|2 years ago
LLMs only make a "best guess" for each next token. That's it. When the guess is wrong we call it a "hallucination", but really the entire output was a "hallucination" to begin with.
This is also analogous to humans, who "hallucinate" incorrect answers too, and usually "hallucinate" less when told to "think through this step by step before giving your answer", etc.
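The "best guess per token" point can be sketched in a few lines. This is a toy illustration, not a real LLM: the vocabulary and scores below are invented for the example, but the mechanism (turn scores into probabilities, pick the most likely token) is the essence of greedy decoding.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_token(logits):
    # Greedy decoding: the single most probable token wins,
    # whether or not it happens to be factually correct.
    probs = softmax(logits)
    return max(probs, key=probs.get)

# Hypothetical scores a model might assign after "The capital of France is"
logits = {"Paris": 4.0, "Lyon": 1.0, "banana": -2.0}
print(next_token(logits))  # → Paris
```

The model never "knows" anything; it only ranks candidates. A confident wrong answer and a confident right answer come out of exactly the same argmax.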