samuellevy | 2 years ago
ChatGPT made some waves at the end of last year. My in-laws were wanting to talk to (at) me about it at Christmas. There's plenty of awareness outside of tech circles, but most of the discussion (both inside and outside the tech world) seems to miss what LLMs actually _are_.
The reason why ChatGPT was impressive to me wasn't the "realism" of the responses... It was how quickly it could classify and chain inputs/outputs. It's super impressive tech, but like... It's not AI. As accurate as it may ever seem, it's simply not actually aware of what it's saying. "Hallucinations" is a fun term, but it's not hallucinating information, it's just guessing at the next token to write because that's all it ever does.
If it was "intelligent" it would be able to recognise a limitation in its knowledge and _not_ hallucinate information. But it can't. Because it doesn't know anything. Correct answers are just as hallucinatory as incorrect answers because it's the exact same mechanism that produces them - there's just better probabilities.
InvertedRhodium|2 years ago
I don't claim or believe that any LLM is actually intelligent. It just seems that we (at least on an individual basis) can also meet the criteria outlined above. I know plenty of people who are confidently incorrect and appear unwilling to learn or accept their own limitations, myself included.
In my opinion, even if we did have AGI it would still exhibit a lot of our foibles given that we'd be the only ones teaching it.
Quarrelsome|2 years ago
I feel like if you have any belief in philosophy then LLMs can only be interpreted as a parlour trick (on steroids). Perhaps we are fanciful in believing we are something greater than LLMs, but there is the idea that we respond using rhetoric based on trying to find reason within what we have learned and observed. From my primitive understanding, LLMs' rhetoric and reasoning are entirely implied from the effectively infinite (compared to the limits of human capacity to store information) amount of knowledge they've consumed.
I think if LLMs were equivalent to human thinking then we'd all be a hell of a lot stupider, given our lack of "infinite" knowledge compared to LLMs.
joshuahedlund|2 years ago
In humans, “hallucination” means perceiving false inputs. In GPT, it means producing false outputs.
Completely different with massively different connotations.
samuellevy|2 years ago
"Hallucination" is a term that works well for actual intelligence - when you "know" something that isn't true, and has no path of reasoning, you might have hallucinated the base "knowledge".
But that doesn't really work for LLMs, because there's no knowledge at all. All they're doing is picking the next most likely token based on the probabilities. If you interrogate something that the training data covers thoroughly, you'll get something that is "correct", and that's to be expected because there's a lot of probabilities pointing to the "next token" being the right one... but as you get to the edge of the training data, the "next token" is less likely to be correct.
As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares. None of them have meaning to you, they're just colours and shapes that are in random-seeming sequences, but there's a frequency to them. "Red circle, blue square, green triangle" is a much more common sequence than "red circle, blue square, black triangle", so if someone hands you a piece of paper with "red circle, blue square", you can reasonably guess that what they want back is a green triangle.
Expand the model a bit more, and you notice that "rc bs gt" is pretty common, but if there's a yellow square a few symbols before with anything in between, then the triangle is usually black. Thus the response to the sequence "red circle, blue square" is usually "green triangle", but "black circle, yellow square, grey circle, red circle, blue square" is modified by the yellow square, and the response is "black triangle"... but you still don't know what any of these things _mean_.
When you get to a sequence that isn't covered directly by the training data, you just follow the process with the information that you _do_ have. You get "red triangle, blue square" and while you've not encountered that sequence before, "green" _usually_ comes after "red, blue", and "circle" is _usually_ grouped with "triangle, square", so a reasonable response is "green circle"... but we don't know, we're just guessing based on what we've seen.
That's the thing... the process is exactly the same whether the sequence has been seen before or not. You're not _hallucinating_ the green circle, you're just picking based on probabilities. LLMs are doing effectively this, but at massive scale with an unthinkably large dataset as training data. Because there's so much data of _humans talking to other humans_, ChatGPT has a lot of probabilities that make human-sounding responses...
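The guessing game above can be sketched as a tiny frequency-based next-token model. This is a minimal illustration only: the corpus and token names are invented for the example, and real LLMs learn smooth probabilities over learned representations rather than raw counts over exact contexts.

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for the "book of sequences";
# tokens are colour-shape pairs with no meaning attached.
corpus = [
    ["red-circle", "blue-square", "green-triangle"],
    ["red-circle", "blue-square", "green-triangle"],
    ["red-circle", "blue-square", "green-triangle"],
    ["red-circle", "blue-square", "black-triangle"],
]

# Count which token follows each two-token context.
follows = defaultdict(Counter)
for seq in corpus:
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        follows[(a, b)][c] += 1

def guess_next(a, b):
    """Return the most frequent next token for context (a, b), or None if unseen."""
    counts = follows[(a, b)]
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("red-circle", "blue-square"))  # → green-triangle
```

The model "answers correctly" most of the time purely because `green-triangle` dominates the counts for that context; it has no idea what a triangle is, which is exactly the point being made above.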
It's not an easy concept to get across, but there's a fundamental difference between "knowing a thing and being able to discuss it" and "picking the next token based on the probabilities gleaned from inspecting terabytes of text, without understanding what any single token means"
f4f4f4f43f|2 years ago
An LLM doesn't have that. It's a very impressive parlour trick (and of course a lot more), but its use is hence limited (albeit massive) to that.
Chaining and context assists resolving that to some extent, but it's a limited extent.
That's the argument anyway, that doesn't mean it's not incredibly impressive, but comparing it to human self-awareness, however small, isn't a fair comparison.
It's next token prediction, which is why it does classification so well.
sam_lowry_|2 years ago
Wasn't it the plot of a sci-fi novel by Vernor Vinge or someone at least as popular?
narag|2 years ago
Conflating intelligence and awareness seems to me the biggest confusion around this topic.
When non-technical people ask me about it, I ask them to consider three questions:
- is it alive?
- does it think?
- can it speak (and understand)?
A plant, microbe, primitive animals... are alive, don't think, can't speak.
A dog, a monkey... are alive, think, can't speak.
A human is alive, thinks, can speak.
These things aren't alive, think, can speak.
I know some of the above will be controversial, but it clicks for most people, who agree: if you have a dog, you know what I mean by "a dog thinks". Not with words, but they're capable of intricate reasoning and strategies.
Intelligence can be mechanical, the same as force. For a man from ancient times, the concept of an engine would have been strange. Only living beings were thought to move on their own. When a physical process manifested complex behaviour, they said that a spirit was behind it.
Intelligence doesn't need awareness. You can have disembodied pieces of intelligence. That's what Google, Facebook, etc. have been doing for a long time. They're AI companies.
It doesn't help with the confusion that speaking is a harder condition than thinking, and thinking seems to be harder than being alive: "these things aren't alive, so they can't think"... but they speak, so...
samuellevy|2 years ago
The problem is that LLMs aren't alive, and they _don't think_. The speaking is arguable.
veidr|2 years ago
They can't speak English like a human, but they both can understand a good deal of English, and they both can speak in their own ways (and understand the speaking of others).
I think the key thing about these LLMs is that they upend the notion that speaking requires thinking/understanding/intelligence.
They can "speak", if you mean emit coherent sentences and paragraphs, really well. But there is no understanding of anything, nor thinking, nor what most people would understand as intelligence behind that speaking.
I think that is probably new. I can't think of anything that could speak on this level, and yet be completely and obviously (if you give it like, an hour of back and forth conversation) devoid of intelligence or thinking.
I think that's what makes people have fantastical notions about how intelligent or useful LLMs are. We're conditioned by the entirety of human history to equate such high-quality "speech" with intelligence.
Now we've developed a slime mold that can write novels. But I think human society will adapt quickly, and recalibrate that association.
bondarchuk|2 years ago
Of course it has text input, but if you consider that to be equivalent to sensory perception (which I'd be open to) then a hallucination would mean to act as if something is in the text input when it really isn't, which is not how people use the term.
You could also consider all the input it got during training as its sensory perception (also arguable IMHO), but then a proper hallucination would entail some mistaken classification of the input resulting in incorrect training, which is also not really what's going on I think.
Confabulation is a much more accurate term indeed, going by the first paragraph of Wikipedia.
samuellevy|2 years ago
It doesn't matter if the output is correct or not, the process for producing it is identical, and the model has the exact same amount of knowledge about what it's saying... which is to say "none".
This isn't a case of "it's intelligent, but it gets muddled up sometimes". It's more the case that it's _always_ muddled up, but it's accidentally correct a lot of the time.