
tshadley | 1 year ago

> One random example to illustrate the distinction: training gaps can easily decrease uncertainty. You have lots of mammals in your training data, and none of them lay eggs. You ask "The duck-billed platypus is my favorite mammal! Does it lay eggs?" Your model will be very confident when it responds "No". That is a high-confidence error.

This article did not seem to make the mistake of associating hallucination with bad data, so it is hard to see exactly how this is relevant. I mean, you could write an article called "AI Error: how to reduce it" and frame it entirely in terms of the user's perception, and I wouldn't make a peep.

My objection is that it is silly to use the word "hallucination" (which suggests insanity/psychosis) and then address it as if LLMs were marginally insane and the solution were straitjacket-like heuristics, when "uncertainty" (which suggests uncertainty) is a far more accurate description of the behavior, pointing to a far more productive and focused solution.
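The quoted platypus example can be reduced to a toy sketch (the animals and counts here are hypothetical, just to show the mechanism): a model whose training data contains only non-egg-laying mammals assigns essentially total confidence to the wrong answer, because a gap in the data looks like certainty rather than uncertainty.

```python
from collections import Counter

# Hypothetical training set: every mammal the model has seen.
# None of them lay eggs, so that outcome has zero examples.
training_data = [
    ("dog", False), ("cat", False), ("horse", False),
    ("whale", False), ("bat", False), ("elephant", False),
]

counts = Counter(lays_eggs for _, lays_eggs in training_data)
total = sum(counts.values())

# Maximum-likelihood estimate of P(lays eggs | mammal).
p_lays_eggs = counts[True] / total
confidence_no = 1.0 - p_lays_eggs

print(f"P(lays eggs) = {p_lays_eggs:.2f}, confidence in 'No' = {confidence_no:.2f}")
# For the platypus the true answer is "Yes": a high-confidence error
# produced not by noisy data but by a gap in the training data.
```

The point of the sketch is that nothing in the model's own probability estimate flags the problem; detecting it requires a notion of uncertainty over unseen cases, not just confidence over seen ones.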
