tshadley | 1 year ago
This article did not seem to make the mistake of associating hallucination with bad data, so it's hard to see exactly how this is relevant. I mean, you could write an article titled "AI Error: How to Reduce It" and frame it entirely in terms of users' perceptions, and I wouldn't make a peep.
My objection is that it is silly to use the word "hallucination" (which suggests insanity/psychosis) and then address it as if LLMs were marginally insane and the solution were straitjacket-like heuristics, when "uncertainty" (which suggests, well, uncertainty) is a far more accurate description of the behavior, pointing to a far more productive and focused solution.
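To make the "uncertainty" framing concrete: many LLM APIs expose per-token log-probabilities, and low-probability tokens are a natural signal that the model is guessing rather than "hallucinating." Here is a minimal sketch of that idea; the token strings and log-probability values are made up for illustration, not taken from any real model output.

```python
import math

# Hypothetical per-token log-probabilities for a generated answer,
# in the shape many LLM APIs return them alongside sampled tokens.
tokens = ["The", " capital", " of", " Australia", " is", " Sydney", "."]
logprobs = [-0.01, -0.05, -0.02, -0.30, -0.04, -2.10, -0.01]

THRESHOLD = 0.5  # flag tokens the model assigned < 50% probability

def uncertain_spans(tokens, logprobs, threshold=THRESHOLD):
    """Return (token, probability) pairs where the model was uncertain."""
    return [(t, math.exp(lp)) for t, lp in zip(tokens, logprobs)
            if math.exp(lp) < threshold]

for tok, p in uncertain_spans(tokens, logprobs):
    print(f"low-confidence token {tok!r}: p = {p:.2f}")
```

On this toy input only " Sydney" falls below the threshold (p ≈ 0.12), which is exactly the span a downstream system could hedge, re-query, or send to retrieval, rather than treating the whole model as marginally insane.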