item 38288104

hallqv | 2 years ago

What you are talking about I would call guessing :)

Fact of the matter is that SOTA LLMs are highly accurate predictors for many topics, certainly above any living human in terms of total AUC of correct predictions on fact-based questions. Some humans are better on certain topics, but no one can match the total AUC, since LLMs have such breadth.
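For what it's worth, "total AUC" here can be made concrete. AUC (area under the ROC curve) is the probability that a randomly chosen correct answer is scored above a randomly chosen incorrect one. A toy sketch, with entirely made-up labels and confidence scores:

```python
# Sketch: scoring a predictor on fact-based yes/no questions by AUC.
# All numbers below are hypothetical, for illustration only.

def auc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that
    a randomly chosen positive is scored above a randomly chosen negative,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical confidence scores on 6 questions,
# where label 1 = the answer was factually correct.
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(auc(labels, scores))  # 0.888...
```

An AUC of 1.0 would mean confidence perfectly separates correct from incorrect answers; 0.5 is chance.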

intended | 2 years ago

It's only "accurate" if a human looks at it.

LLMs are fine - people are attributing superpowers to them when they discuss hallucinations.

LLMs do not "think". They generated the correct text, exactly as they were modeled to do.

The observer feels that the facts are wrong.

That's an issue with the observer, not the model. The model was never trained for facts; it was trained for text.
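To make that point concrete: a language model's training objective only rewards predicting likely next tokens, not stating true facts. A toy bigram counting "model" shows the idea; the corpus is made up for illustration:

```python
# Minimal sketch: next-token prediction tracks text statistics,
# with no notion of truth. Toy bigram model trained by counting.
from collections import Counter, defaultdict

corpus = "the moon is made of cheese . the moon is made of rock .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_probs(prev):
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

# After "of", the model assigns "cheese" and "rock" equal probability:
# both continuations are equally common in the training text, and that
# is all the objective ever measured.
print(next_token_probs("of"))  # {'cheese': 0.5, 'rock': 0.5}
```

Whether "cheese" is a hallucination is a judgment the observer brings; the model just reproduces the distribution it was fit to.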

smeagull | 2 years ago

You're talking about retrieval.