illusionist123's comments

illusionist123 | 2 years ago | on: Open Challenges in LLM Research

I think it's not possible to get rid of hallucinations given the structure of LLMs. Getting rid of hallucinations requires knowing how to differentiate fact from fiction. An analogy from programming languages that people might understand is type systems: well-typed programs are facts and ill-typed programs are fictions (relative to the given typing of the program). Eliminating hallucinations from LLMs would require something similar, i.e. a type system or grammar for what counts as a fact. Another analogy is Prolog, which uses logical resolution to derive consequences from a given database of facts. LLMs do not use logical resolution, and they have no database of facts against which to check whether whatever they generate is actually factual (or logically follows from some set of facts). LLMs are essentially Markov chains, and I am certain it is impossible to have Markov chains without hallucinations.
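The Markov-chain point can be illustrated with a toy sketch (the mini-corpus and word-level bigram model here are purely illustrative assumptions, not how any real LLM is trained): every local transition the chain makes is taken from the training data, yet the chain can splice fragments of different sentences together into an output that is false, with no mechanism for ruling it out.

```python
import random

# Tiny illustrative "training corpus" of true sentences.
facts = [
    "paris is the capital of france",
    "berlin is the capital of germany",
    "rome is the capital of italy",
]

# Build a word-level bigram Markov chain: map each word to the
# list of words that follow it anywhere in the corpus.
chain = {}
for sentence in facts:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)

def generate(start="paris", max_len=8, seed=0):
    """Sample a sentence by following bigram transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_len - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Each step is a correlation seen in training, but nothing checks the
# result against a fact database, so the chain can emit e.g.
# "paris is the capital of germany" -- a fluent hallucination.
```

Depending on the random seed, the output may happen to be true or may be a spliced falsehood; the point is that the model itself cannot tell the difference.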

So whoever is working on this problem, good luck, because you have a lot of work to do to get Markov chains to output only facts and not just correlations of the training data.