samuellevy | 2 years ago
GPT isn't making true or false outputs. It's just making outputs. The truthiness or falseness of any output is irrelevant because it has no concept of true or false. We're assigning those values to the outputs ourselves, but like... it doesn't know the difference.
It's like blaming a die for a high or a low roll - it's just doing rolls. It has no knowledge of a good or a bad roll. GPT is like a Rube Goldberg machine for rolling dice that's _more likely_ to roll the number that you want, but really it's just rolling dice.
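The "loaded die" framing can be sketched in a few lines of Python. This is purely illustrative, not GPT's actual mechanics; the token names and probabilities are made up:

```python
import random

# A language model assigns a probability to each candidate next token,
# then sampling picks one -- more likely to land on high-probability
# tokens, but still just a weighted roll.
def roll_next_token(token_probs, rng=random):
    """token_probs: dict mapping token -> probability (summing to ~1)."""
    tokens = list(token_probs)
    weights = [token_probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token distribution for the prompt "The sky is ..."
probs = {"blue": 0.7, "falling": 0.2, "green": 0.1}
rolls = [roll_next_token(probs) for _ in range(1000)]
# "blue" comes up most often, but "green" still shows up sometimes.
# No individual roll is "true" or "false" -- each is just a sample.
```

The point of the analogy: nothing in the sampling step checks the output against reality; the weights only encode what was likely in the training data.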
naniwaduni | 2 years ago
Yeah, one way to conceive of the issue is that GPT doesn't know when to shut up. Intuitively, you can kind of understand how this might be the case: the training data reflects when someone did produce output, not when they didn't, which is going to bias strongly toward producing confident output.
A lot of the conversation about GPT hallucinations has felt like an extended rehash of the conversations we've been having about the difference between plausible and accurate machine translations since, like, 2016ish.
hnfong | 2 years ago
Whenever a human speaks, it's just vibrations of air molecules, produced by the mouth and throat, which in turn are controlled by electrical signals in the human's neural network. Those neurons just make muscles move. They don't have any concept of true or false. At least nobody has found a "true or false" neuron in the brain.