top | item 45801230

rewilder12 | 3 months ago

LLMs by definition do not produce facts. You will never be able to eliminate hallucinations. It's practically impossible.

Big tech created a problem for themselves by allowing people to believe the things their products generate using LLMs are facts.

We are only reaching the obvious conclusion of where this leads.

kentm|3 months ago

A talk I went to made the point that LLMs don't sometimes hallucinate. They always hallucinate -- it's what they're made to do. Usually those hallucinations align with reality in some way, but sometimes they don't.

I always thought that was a correct and useful observation.

hnuser123456|3 months ago

To be sure, a lot of this can be blamed on using AI Studio to ask a small model a factual question. It's the raw LLM output of a highly compressed model; it isn't meant to be everyday user-facing like the default Gemini models, and it doesn't have the same web search and fact-checking behind the scenes.

On the other hand, training a small model to hallucinate less would be a significant development. Perhaps post-training fine-tuning could help: after getting a sense of what depth of factual knowledge the model has actually absorbed, add a chunk of training samples whose questions go beyond that limit, with the model responding "Sorry, I'm a small language model and that question is out of my depth." I know we all hate refusals, but surely there's room to improve them.
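The idea above can be sketched in code. This is a minimal, hypothetical illustration, not anyone's actual training pipeline: assume you have probed the model with factual questions and recorded its answers against references, then convert each miss into a fine-tuning sample that teaches a refusal instead of a wrong answer. The function name, data shapes, and refusal string are all invented for the sketch.

```python
import json

# Hypothetical canned refusal, taken from the comment above.
REFUSAL = "Sorry, I'm a small language model and that question is out of my depth."

def build_refusal_samples(probe_results):
    """Turn probing results into refusal fine-tuning samples.

    probe_results: list of (question, model_answer, reference_answer) tuples.
    Questions the model answered incorrectly are assumed to exceed its
    factual depth, so they are paired with a refusal rather than the
    wrong answer. (Real pipelines would use a fuzzier match than this.)
    """
    samples = []
    for question, model_answer, reference in probe_results:
        if model_answer.strip().lower() != reference.strip().lower():
            samples.append({"prompt": question, "completion": REFUSAL})
    return samples

if __name__ == "__main__":
    probes = [
        # (question, what the model said, the reference answer)
        ("What is the capital of France?", "Paris", "Paris"),
        ("Who won the 1937 Tour de France?", "Gino Bartali", "Roger Lapebie"),
    ]
    # Emit JSONL-style samples for the questions the model got wrong.
    for sample in build_refusal_samples(probes):
        print(json.dumps(sample))
```

The open problem, of course, is the probing step itself: deciding which misses reflect a genuine knowledge gap rather than a one-off sampling error.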

th0ma5|3 months ago

All of these techniques have just pushed the problem around so far. And anything short of 100% accuracy is a 100% failure in any single problematic instance.