Real hallucination is when you sense something that isn't physically there. LLMs don't sense anything, so this "hallucination" term is questionable from the get-go.
I think the point is that the LLM arrives at an obvious and undeniable fact, a misconception common in human discourse, an assertion of something unknowable, a statement of what appears to be opinion, a "creative" response to a brief (whether impressive or unimpressive), a reasonable "guess", and a random answer only very loosely linked to the prompt in essentially the same way. Humans generally arrive at such responses in different ways, and are often conscious of whether they're certain, reasonably confident, guessing, needing an answer to come out a particular way to fit their wider goal, or bullshitting.
So if it's "hallucinating" a probable continuation which asserts something that humans [incidentally] understand to be completely wrong or not in the source material, it's going through exactly the same process to arrive at a continuation which [incidentally] is understood to contain only accurate statements or valid summarizations.
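To make that concrete, here's a toy sketch of what sampling a continuation looks like (made-up vocabulary and probabilities, not taken from any real model). Nothing in the loop knows or cares whether the sentence it ends up producing is true:

    import random

    # Toy next-token tables with made-up probabilities -- purely illustrative,
    # not how any real model is parameterized. The point: the sampling loop is
    # the same whether the sentence it produces turns out to be true or false.
    next_token_probs = {
        "The capital of": [("France", 0.6), ("Australia", 0.4)],
        "France": [("is", 1.0)],
        "Australia": [("is", 1.0)],
        "is": [("Paris.", 0.5), ("Canberra.", 0.3), ("Sydney.", 0.2)],
    }

    def sample_continuation(prompt, max_tokens=3):
        context, out = prompt, []
        for _ in range(max_tokens):
            options = next_token_probs.get(context)
            if not options:
                break
            tokens, weights = zip(*options)
            token = random.choices(tokens, weights=weights)[0]
            out.append(token)
            context = token
        return prompt + " " + " ".join(out)

    print(sample_continuation("The capital of"))

Sometimes that prints "The capital of France is Paris." (accurate) and sometimes "The capital of Australia is Sydney." (wrong, it's Canberra), but the process that produced each one is identical.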
If I had a penny for each time a human has confidently concluded something that is entirely incorrect… I've inadvertently done it countless times myself, and so has every person I know.
> Suppose I make a massive book of predictions. Some of which turn out to be correct. Am I now capable of predicting the future?
If you write a book of random predictions without any insight, the vast majority of them will be false, so even if a few of them turn out right it is not impressive, nor would anyone say you're capable of predicting the future.
In comparison, the OP states that GPT-4's predictions are 97% correct. And yes, I would say that is pretty impressive. If 97% of what I said about the future turned out to be correct, I would be considered a wizard and probably be a billionaire.
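Back-of-the-envelope version of the difference, using a made-up batch of 1,000 yes/no questions (the numbers are illustrative only):

    import random

    # Illustrative only: 1,000 made-up yes/no questions.
    random.seed(0)
    n = 1000
    truth = [random.choice([True, False]) for _ in range(n)]

    # A "massive book" of coin-flip predictions: ~50% land correct by luck alone.
    coin_book = [random.choice([True, False]) for _ in range(n)]

    # A predictor that is right 97% of the time.
    predictor = [t if random.random() < 0.97 else (not t) for t in truth]

    def accuracy(preds):
        return sum(p == t for p, t in zip(preds, truth)) / n

    print(f"random book:   {accuracy(coin_book):.1%}")   # roughly 50%
    print(f"97% predictor: {accuracy(predictor):.1%}")   # roughly 97%

The first case is what the "book of predictions" thought experiment describes: plenty of individual hits, none of them informative. A sustained 97% hit rate is a different regime.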
What you are talking about I would call guessing :)
Fact of the matter is that SOTA LLMs are highly accurate predictors for many topics, certainly above any living human in terms of total AUC of correct predictions on fact-based questions. Some humans are better on certain topics, but no one can match the total AUC since LLMs have such breadth.
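To put rough numbers on the breadth point (entirely made up, and "AUC" here is shorthand for total correct answers across topics, not the ROC-curve metric):

    # Made-up numbers to illustrate the breadth argument only.
    topics = 100
    per_topic = 50  # questions per topic

    # A human expert: excellent on one specialty, near chance everywhere else.
    expert_total = 0.95 * per_topic + 0.55 * per_topic * (topics - 1)

    # A broad model: merely decent, but decent on every topic.
    model_total = 0.85 * per_topic * topics

    print(f"expert total correct: {expert_total:.0f}")  # 2770
    print(f"model total correct:  {model_total:.0f}")   # 4250

The expert wins head-to-head on their specialty, but the breadth term dominates the total.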
> Am I now capable of predicting the future?
Suppose I wrote the book to be as banal (i.e. highly probable) as possible. Am I predicting the future now? And how impressive is it?