top | item 40149629


lasereyes136 | 1 year ago

I think part of the point of the article is that LLMs don't lie: they are designed just to produce the next word so as to form a credible-sounding sentence or sequence of sentences. Expecting them to do more is an expectations problem created by the hype around GenAI.

I don't think we have the correct word for what LLMs do, but "lie" and "hallucination" are not really correct.

HarHarVeryFunny | 1 year ago

Saying "I don't know" doesn't require too much of a change. This isn't a different mode of operation where it's introspecting about its own knowledge - it's just the best continuation prediction in a context where the person/entity being questioned is not equipped to answer.

LLMs build quite deep representations of the input on which they base their next-word prediction (text continuation), and it has been shown that they sometimes already internally register when something they are generating is low-confidence or false, so with appropriate training data they might learn to attend to that signal and predict "I don't know" or "I'm not sure".
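One crude external proxy for the low-confidence signal mentioned above is the per-token log-probabilities that many LLM APIs expose alongside generated text. A minimal sketch (the function name, threshold value, and input data are my own illustrative choices, not anything from the comment):

```python
import math

def flag_low_confidence(token_logprobs, threshold=-1.5):
    """Flag a generated span as low-confidence when its average
    per-token log-probability (natural log) falls below a threshold.

    token_logprobs: one log-probability per generated token, as many
    LLM APIs return them. The threshold here is arbitrary; in practice
    it would be tuned on held-out data.
    """
    if not token_logprobs:
        return True  # nothing generated: treat as unknown
    avg = sum(token_logprobs) / len(token_logprobs)
    return avg < threshold

# A "confident" span: each token assigned ~90% probability.
confident = [math.log(0.9)] * 5
# An "uncertain" span: each token assigned ~10% probability.
uncertain = [math.log(0.1)] * 5

print(flag_low_confidence(confident))  # False
print(flag_low_confidence(uncertain))  # True
```

This only measures how sure the model is of its wording, not whether the claim is true, which is exactly why the comment argues for training the model to act on such signals rather than reading them off afterwards.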

Improving LLMs' ability to answer like this requires them to have a better idea of what is true or not. Humans do this by remembering where they learnt something: first-hand experience, a textbook, a trusted friend, or a less trustworthy source. LLMs' ability to discern the truth could be boosted by giving them the sources of their training data, perhaps together with a trustworthiness rating (although they may be able to learn that for themselves).

Tagbert | 1 year ago

I think "hallucination" is pretty close. It captures what happens when you give an answer based on what you think you remember, even when that memory is not correct.

How many people would agree that P.T. Barnum said “There’s a sucker born every minute”? That would be a hallucination.

The quote is from Adam Forepaugh.

unaindz | 1 year ago

The best argument I have found against using "lie" or "hallucination" to describe LLMs' output is that it humanizes them for people who don't know their inner workings. Saying they lie implies intent, which is pretty bad, but even "hallucination" humanizes them unnecessarily. "Bullshitting" seems the best word for it, though even then intent can be read in where there isn't any.

codewench | 1 year ago

> I don't think we have the correct word for what LLMs do, but "lie" and "hallucination" are not really correct.

I believe 'bullshit' is accurate, as in "The chatbot didn't know the answer, so it started bullshitting".