jasonlfunk | 1 year ago
The only way to know if it did “hallucinate” is to already know the correct answer. If you can make a system that knows when an answer is right or not, you no longer need the LLM!
keiferski | 1 year ago
Wanting to use accurate language isn't exhausting; it's a requirement if you want to think about and discuss problems clearly.
baq | 1 year ago
https://en.wikipedia.org/wiki/Confabulation
stoniejohnson | 1 year ago
Sometimes it is coherent (grounded in physical and social dynamics) and sometimes it is not.
We need systems that try to be coherent, not systems that try to be unequivocally right, which wouldn't be possible.
Jensson | 1 year ago
The fact that it isn't possible to be right about 100% of things doesn't mean that you shouldn't try to be right.
Humans generally try to be right; these models don't. That is a massive difference you can't ignore. The fact that humans often fail to be right doesn't mean that these models shouldn't even try to be right.
tbalsam | 1 year ago
This is why things in this arena are a hard problem: it's extremely difficult to actually know the entropy of certain meanings of words, phrases, etc., without a comical amount of computation.
This is also why a lot of the interpretability methods people use these days have some difficult and effectively permanent challenges inherent to them. Not that they're useless, but I personally feel they are dangerous if used without knowledge of the class of side effects that comes with them.
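To make the entropy point concrete, here's a toy sketch; the logits are a made-up stand-in for a real model's next-token output:

    import numpy as np

    def next_token_entropy(logits):
        # Softmax the raw logits into a probability distribution,
        # shifting by the max for numerical stability.
        z = logits - np.max(logits)
        p = np.exp(z) / np.sum(np.exp(z))
        # Shannon entropy in bits: low = confident, high = many
        # near-interchangeable continuations.
        return -np.sum(p * np.log2(p + 1e-12))

    # Hypothetical logits over a 5-token vocabulary.
    print(next_token_entropy(np.array([4.0, 1.0, 0.5, 0.2, 0.1])))

That's entropy over tokens, which is easy. The entropy over *meanings* requires marginalizing over all the token strings that say the same thing, and that's where the comical amount of computation comes in.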
marcosdumay | 1 year ago
The Boolean answer to that is "yes".
But if Boolean logic were a good representation of reality, we would have solved that AGI thing ages ago. In practice, your neural network is trained on a lot of samples that have some relations between themselves, and to the extent that those relations are predictable, the NN can be perfectly able to predict similar ones.
There's an entire discipline about testing NNs to see how well they predict things. It's the other side of the coin of training them.
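(A toy illustration of that discipline, assuming scikit-learn and entirely made-up data; the only point is that the test samples are held out of training:)

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    # Made-up data: 200 samples, 4 features, a learnable rule.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    # Hold out a quarter of the samples the network never trains on...
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)

    # ...then measure how well it predicts the unseen ones.
    print(accuracy_score(y_te, net.predict(X_te)))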
Then we get to this "know the correct answer" part. If the answer to a question were predictable from the question's words, nobody would ask it. So yes, it's a defining property of NNs that they can't create answers to the kinds of questions people have been asking these LLMs.
However, they do have an internal Q&A database they were trained on. Except that the current architectures can't know whether an answer comes from that database either. So it is possible to force them into giving useful answers, but currently they don't.
yieldcrv | 1 year ago
The fact checker doesn't synthesize the facts or the topic.
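Something like this loop, where generate and check_fact are hypothetical stand-ins for an LLM call and an external, ground-truth-backed checker:

    def answer_with_checking(question, generate, check_fact, max_tries=3):
        # The generator synthesizes candidate answers; the checker only
        # accepts or rejects them against some external source of truth.
        for _ in range(max_tries):
            candidate = generate(question)
            if check_fact(question, candidate):
                return candidate
        return None  # refuse rather than confabulate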