hallqv | 2 years ago
Fact of the matter is that SOTA LLMs are highly accurate predictors on many topics, certainly above any living human in terms of total AUC of correct predictions on fact-based questions. Some humans are better on certain topics, but no one can match the total AUC, since LLMs have such breadth.
intended|2 years ago
LLMs are fine - people are attributing superpowers to them when they discuss hallucinations.
LLMs do not "think". They produced the correct text, exactly as they were modeled to do.
The observer feels that the facts are wrong.
That's an issue with the observer, not the model. The model was never trained for facts; it was trained for text.
smeagull|2 years ago