samuellevy | 2 years ago
"Hallucination" is a term that works well for actual intelligence: when you "know" something that isn't true, and there's no path of reasoning behind it, you might have hallucinated the base "knowledge".
But that doesn't really work for LLMs, because there's no knowledge at all. All they're doing is picking the next most likely token based on the probabilities. If you interrogate something that the training data covers thoroughly, you'll get something that is "correct", and that's to be expected because there are a lot of probabilities pointing to the "next token" being the right one... but as you get to the edge of the training data, the "next token" is less likely to be correct.
As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares. None of them have meaning to you, they're just colours and shapes in random-seeming sequences, but there's a frequency to them. "Red circle, blue square, green triangle" is a much more common sequence than "red circle, blue square, black triangle", so if someone hands you a piece of paper with "red circle, blue square", you can reasonably guess that what they want back is a green triangle.
Expand the model a bit more, and you notice that "rc bs gt" is pretty common, but if there's a yellow square a few symbols before with anything in between, then the triangle is usually black. Thus the response to the sequence "red circle, blue square" is usually "green triangle", but "black circle, yellow square, grey circle, red circle, blue square" is modified by the yellow square, and the response is "black triangle"... but you still don't know what any of these things _mean_.
When you get to a sequence that isn't covered directly by the training data, you just follow the process with the information that you _do_ have. You get "red triangle, blue square" and while you've not encountered that sequence before, "green" _usually_ comes after "red, blue", and "circle" is _usually_ grouped with "triangle, square", so a reasonable response is "green circle"... but we don't know, we're just guessing based on what we've seen.
That's the thing... the process is exactly the same whether the sequence has been seen before or not. You're not _hallucinating_ the green circle, you're just picking based on probabilities. LLMs are doing effectively this, but at massive scale with an unthinkably large dataset as training data. Because there's so much data of _humans talking to other humans_, ChatGPT has a lot of probabilities that make human-sounding responses...
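The thought experiment above can be sketched as a toy trigram predictor. The training sequences and counts here are invented for illustration; the point is that the exact same lookup runs whether the context was seen in training or not:

```python
from collections import Counter, defaultdict

# A toy version of the "book of shape sequences" above.
# The sequences are made up; tokens are colour-shape strings.
training_sequences = [
    ["red circle", "blue square", "green triangle"],
    ["red circle", "blue square", "green triangle"],
    ["black circle", "yellow square", "grey circle",
     "red circle", "blue square", "black triangle"],
]

# Count which token follows each two-token context (a trigram model).
follow_counts = defaultdict(Counter)
for seq in training_sequences:
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        follow_counts[(a, b)][c] += 1

def predict(a, b):
    """Most likely next token after the context (a, b)."""
    counts = follow_counts.get((a, b))
    if counts is None:
        # Unseen context: back off to any context ending in the same
        # token. Same mechanism, just coarser statistics -- there is
        # no separate "I don't know" mode.
        counts = Counter()
        for (_, y), c in follow_counts.items():
            if y == b:
                counts.update(c)
    return counts.most_common(1)[0][0] if counts else None

# Seen context: the most common continuation wins.
print(predict("red circle", "blue square"))    # -> green triangle

# Unseen context: the same process still yields a confident-looking guess.
print(predict("red triangle", "blue square"))  # -> green triangle
```

Note that a two-token window is too short to notice the yellow square earlier in the sequence; real LLMs condition on thousands of tokens, but the selection step at the end is the same kind of thing.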
It's not an easy concept to get across, but there's a fundamental difference between "knowing a thing and being able to discuss it" and "picking the next token based on the probabilities gleaned from inspecting terabytes of text, without understanding what any single token means"
hashhar|2 years ago
But yes, it's unfortunate that when the next tokens are joined together and laid out in the form of a sentence, it appears "intelligent" to people. However, if you instead lay out the individual probabilities of each token, it'll be more obvious what ChatGPT/LLMs actually do.
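One way to lay that out is to print the whole ranked next-token distribution rather than the single sampled token. The context and the counts below are invented purely for illustration (a real model exposes comparable log-probabilities):

```python
from collections import Counter

# Hypothetical next-token counts for the context
# "The capital of France is". The numbers are made up.
next_token_counts = Counter({"Paris": 9120, "a": 310, "the": 240,
                             "located": 95, "Lyon": 12})

total = sum(next_token_counts.values())
for token, count in next_token_counts.most_common():
    # Displayed this way, the output reads as a ranked probability
    # table, not as a statement the model "knows" to be true.
    print(f"{token!r:>12}  p = {count / total:.3f}")
```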
hnfong|2 years ago
How do you know? And more importantly, how do you prove it to others? The only way to prove it is to say: "OK, you are human, I am human, each of us knows this is true for ourselves, so let's be nice and assume it's true for each other as well."
> But that doesn't really work for LLMs, because there's no knowledge at all.
How do you know? I understand your argument that the LLM "is just" guessing probabilities, but surely, if the LLM can complete the sentence "The Harry Potter book series was written by ", the knowledge is encoded in its sea of parameters and probabilities, right?
Asserting that it does not know things is pretty absurd. You're conflating "knowledge" with the "feeling" of knowing things, or the ability to introspect one's knowledge and thoughts.
> As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares.
I'd argue thought experiments are pretty useless here. The smaller models are qualitatively different from the larger models, at least from a functional perspective. A GPT with hundreds of parameters may be very similar to the one you're describing in your thought experiment, but it's well known that GPT models with billions of parameters have emergent properties that make them exhibit much more human-like behavior.
Does your thought experiment scale to hundreds of thousands of tokens, and billions of parameters?
Also, as with the Chinese Room argument, the problem is that you're asserting the computer, the GPU, the bare metal does not understand anything. Just like how our brain cells don't understand anything either. It's _humans_ that are intelligent, it's _humans_ that feel and know things. Your thought experiment would have the human _emulate_ the bare metal layer, but nobody said that layer was intelligent in the first place. Intelligence is a property of the _whole system_ (whether humans or GPT), and apparently once you get enough "neurons" the behavior is somewhat emergent. The fact that you can reductively break down GPT and show that each individual component is not intelligent does not imply the whole system is not intelligent -- you can similarly reductively break down the brain into neurons, cells, even atoms, and they aren't intelligent at all. We don't even know where our intelligence resides, and it's one of the greatest mysteries.
Imagine trying to convince an alien species that humans are actually intelligent and sentient. An alien opens a human brain and looks inside: "Yeah, I know these. Cells. They're just little biological machines optimized for reproduction. You say humans are intelligent? But your brains are just cleverly organized cells that handle electrical signals. I don't see anything intelligent about that. Unlike us: we have silicon-based biology, which is _obviously_ intelligent."
You sound like that alien.
mplewis|2 years ago
ChatGPT isn’t even a bullshitter when it hallucinates – it simply does not know when to stop. It has no conceptual model that guides its output. It parrots words but does not know things.