lasereyes136 | 1 year ago
I don't think we have the correct word for what LLMs do but lie and hallucinations are not really correct.
HarHarVeryFunny | 1 year ago
LLMs build quite deep representations of the input on which they base their next-word prediction (text continuation), and it has been shown that they sometimes already know when something they are generating is low-confidence or false. So, with appropriate training data, they might learn to attend to this and predict "I don't know" or "I'm not sure".
Improving LLMs' ability to answer like this requires them to have a better idea of what is true or not. Humans do this by remembering where they learnt something: was it first-hand experience, a textbook, a trusted friend, or a less trustworthy source? LLMs' ability to discern the truth could be boosted by giving them the sources of their training data, perhaps together with a trustworthiness rating (although they may be able to learn that for themselves).
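The idea of abstaining on low-confidence generations can be sketched very roughly. This is a hypothetical illustration, not any LLM's actual mechanism: it assumes access to per-token log-probabilities (which some LLM APIs expose) and uses a made-up threshold to decide when to say "I'm not sure".

```python
import math

def should_abstain(token_logprobs, threshold=0.5):
    """Return True if the mean per-token probability falls below threshold.

    `token_logprobs` is assumed to be a list of log-probabilities for the
    tokens in a generated answer; the values below are invented for
    illustration, not real model output.
    """
    probs = [math.exp(lp) for lp in token_logprobs]
    mean_prob = sum(probs) / len(probs)
    return mean_prob < threshold

confident = [-0.05, -0.1, -0.02]   # tokens the model assigned high probability
uncertain = [-1.6, -2.3, -0.9]     # tokens the model assigned low probability

print(should_abstain(confident))   # answer normally
print(should_abstain(uncertain))   # better to say "I'm not sure"
```

In practice, raw token probability is a crude proxy for truthfulness, which is why the comment above suggests training on source provenance rather than relying on confidence alone.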
Tagbert | 1 year ago
How many people would agree that P.T. Barnum said “There’s a sucker born every minute”? That would be a hallucination.
The quote is from Adam Forepaugh.
codewench | 1 year ago
I believe 'bullshit' is accurate, as in "The chatbot didn't know the answer, so it started bullshitting".