SmooL | 2 years ago
This solution... doesn't prohibit hallucinations? As far as I can tell it only makes them less likely. The AI is still totally capable of hallucinating, it's just less likely to hallucinate an answer to _question X_ if the query includes data that has the answer.
I've been thinking that it might be useful if you could actually _remove_ all the stored facts that the LLM has inside of it. An LLM that didn't natively know a whole bunch of random trivia, didn't know basic math, didn't know much of anything _except_ what was put into the initial query would be valuable. The AI can't hallucinate anything if it doesn't know anything to hallucinate.
How you achieve this practically I have no clue. I'm not sure it's even possible to remove the knowledge that 1+1=2 without removing the knowledge of how to write a python script one could execute to figure it out.
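The split described above — stripping stored facts while keeping the ability to write code — is roughly the tool-use pattern: the model emits a program and an external interpreter supplies the answer. A minimal sketch (no real model is called; `model_output` is a hand-written stand-in for what a knowledge-stripped LLM might generate):

```python
# Hypothetical: instead of trusting the model's memorized "fact" that
# 1 + 1 = 2, the model emits code and an interpreter computes the answer.
model_output = "result = 1 + 1"  # stand-in for generated code

namespace = {}
# Execute in a bare namespace so the snippet can only compute, not
# reach into builtins (a toy sandbox, not a real security boundary).
exec(model_output, {"__builtins__": {}}, namespace)
print(namespace["result"])  # 2
```

The answer now comes from the interpreter, not from weights, so this class of output can't be "hallucinated" — though the model can still generate wrong code.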
pjc50 | 2 years ago
They've got a big database of logical-reasoning propositions, and they've been trying to run a much more formal-logic process over it.
hackernewds | 2 years ago