eis | 2 years ago

Isn't that a bit of a holy grail though? If your software can fact check the output of LLMs and prevent hallucinations then why not use that as the AI to get the answers in the first place?

jacquesm | 2 years ago

Because you would - hopefully - be checking against something that is, on average, of higher quality than the combined input of an LLM.

I'm not sure whether this can work, but it would be nice to see a trial. You could probably do this by hand if you wanted to: break the LLM's answer up into factoids, check each of those individually, and assign each a score based on the amount of supporting evidence for it. I'd love that as a plug-in to a browser too.
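
A minimal sketch of that by-hand pipeline in Python, to make the steps concrete: split the answer into factoid sentences, gather supporting evidence for each, and score by how much evidence turns up. This is hypothetical scaffolding, not a real implementation; in particular, search_evidence is a stub for whatever retrieval backend (a search API, a curated corpus) you would actually check against, and the sentence splitter is a crude stand-in for proper claim extraction.

    import re
    from dataclasses import dataclass

    @dataclass
    class ScoredFactoid:
        text: str
        evidence: list[str]

        @property
        def score(self) -> int:
            # Naive score: count of independent supporting snippets.
            return len(self.evidence)

    def split_into_factoids(answer: str) -> list[str]:
        # Crude sentence split; a real system would extract atomic
        # claims, e.g. with an NLP library or a second LLM pass.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]

    def search_evidence(factoid: str) -> list[str]:
        # Hypothetical stub: a real version would query a trusted
        # corpus or search engine and return supporting snippets.
        return []

    def fact_check(answer: str) -> list[ScoredFactoid]:
        return [ScoredFactoid(f, search_evidence(f)) for f in split_into_factoids(answer)]

    if __name__ == "__main__":
        answer = "The Eiffel Tower is in Paris. It opened in 1889."
        for sf in fact_check(answer):
            print(sf.score, sf.text)

A browser plug-in version would run the same loop over the text of a rendered answer and annotate each factoid with its score.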