happytoexplain | 6 days ago

The point is that LLMs are easily led by questions and confused by implied premises in ways that humans are not (not that a human will know the answer better, but that a human doesn't "trick" the question-asker in this way). And people asking questions unintentionally use incorrect premises or leading wording all the time. That's why LLMs are inappropriate for domains where the asker has a large knowledge gap: a programmer asking about a programming language is a small gap, while millions of people asking about nutrition will include a lot of large gaps. The question-asker can't be relied upon to "know what they don't know" and apply their own heuristics for judging how right or wrong the LLM might be. Virtually everybody lacks these heuristics; we are much better at modeling humans in our minds when interpreting their communications.

Further, if the information is important (nutrition) and you add liability to the mix (safety and health), you compound how inappropriate it is to use LLMs for the job.


zahlman | 6 days ago

> That's why LLMs are inappropriate for domains with a large knowledge gap (a programmer asking about a programming language is a small gap - millions of people asking about nutrition will contain a lot of large gaps). The question asker can't be relied upon to "know what they don't know" and use their own heuristics for deciding how right or wrong the LLM might be.

Okay, but the question asked objectively had nothing to do with nutrition whatsoever.

happytoexplain | 6 days ago

The specific (usually humorous) questions-and-answers that make headlines are a distraction. I am not attacking LLMs, so a defense is moot. I'm describing an intrinsic quality of (current) LLMs.