(no title)
jrpt | 11 months ago
It doesn't sound novel from the article. I built something similar over a year ago. Here's a related example from Langchain "How to get a RAG application to add citations" https://python.langchain.com/docs/how_to/qa_citations/
afro88 | 11 months ago
They are checking the _generated_ text by trying to find documents containing its facts, then rating how relevant (causally related) those facts are. This is different from looking up documents in order to generate an answer for a prompt. It's the reverse: once the answer has been generated, they essentially fact-check it.
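A minimal sketch of that reverse flow, assuming a toy keyword-overlap retriever stands in for a real search index and a simple overlap score approximates the "causal relevance" rating (all names here are hypothetical, not from the article):

```python
# Post-hoc fact checking sketch: split a *generated* answer into claims,
# then look for a supporting document per claim, rather than retrieving
# documents first to generate the answer.

def split_into_claims(answer: str) -> list[str]:
    # Naive: treat each sentence as one factual claim.
    return [s.strip() for s in answer.split(".") if s.strip()]

def overlap_score(claim: str, doc: str) -> float:
    # Fraction of the claim's words that appear in the document.
    claim_words = set(claim.lower().split())
    doc_words = set(doc.lower().split())
    return len(claim_words & doc_words) / len(claim_words) if claim_words else 0.0

def fact_check(answer: str, corpus: list[str], threshold: float = 0.5) -> list[dict]:
    # For each claim in the generated answer, find the best supporting
    # document and flag claims with no strong support.
    report = []
    for claim in split_into_claims(answer):
        best_doc, best = max(((d, overlap_score(claim, d)) for d in corpus),
                             key=lambda t: t[1])
        report.append({"claim": claim,
                       "supported": best >= threshold,
                       "evidence": best_doc,
                       "score": round(best, 2)})
    return report

corpus = ["Paris is the capital of France",
          "The Eiffel Tower is located in Paris"]
answer = "Paris is the capital of France. Paris has a population of ten billion."
for row in fact_check(answer, corpus):
    print(row["supported"], row["claim"])
```

The supported claim matches the first document; the fabricated population claim finds no document above the threshold and gets flagged. A real system would use an LLM or NLI model for the relevance rating instead of word overlap.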