(no title)
Xudong | 1 year ago
Compared with the initial version, we have focused mainly on efficiency: the checking process is now 10X faster with no loss of accuracy.
westurner | 1 year ago
A 2020 Meta paper [1] mentions FEVER [2], which was published in 2018.
[1] "Language models as fact checkers?" (2020) https://scholar.google.com/scholar?cites=3466959631133385664
[2] https://paperswithcode.com/dataset/fever
I've collected various ideas for publishing premises as linked data; "#StructuredPremises" "#nbmeta" https://www.google.com/search?q=%22structuredpremises%22
From "GenAI and erroneous medical references" https://news.ycombinator.com/item?id=39497333 :
>> Additional layers of these 'LLMs' could read the responses and determine whether their premises are valid and their logic is sound as necessary to support the presented conclusion(s), and then just suggest a different citation URL for the preceding text
> [...] "Find tests for this code"
> "Find citations for this bias"
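The layered-verification idea quoted above can be sketched in a few lines: a second "checker" pass reads a generated answer, splits it into premises, and flags any premise it cannot support. This is only an illustration of the pattern; `check_premise` and the `KNOWN_FACTS` set are hypothetical stand-ins for what would really be a retrieval step or a second LLM call.

```python
# Sketch of a post-hoc verification layer: a checker pass reads a
# generated answer and flags unsupported premises.
# KNOWN_FACTS and check_premise are hypothetical stand-ins for a
# real retrieval system or verifier-LLM call.

KNOWN_FACTS = {
    "FEVER was published in 2018",
    "FEVER is a fact-verification dataset",
}

def check_premise(premise: str) -> bool:
    # Stand-in for evidence retrieval / a second model's judgment.
    return premise in KNOWN_FACTS

def verify_answer(answer: str) -> dict:
    # Naively treat each sentence as one premise; a real system
    # would do proper claim extraction first.
    premises = [s.strip() for s in answer.split(".") if s.strip()]
    unsupported = [p for p in premises if not check_premise(p)]
    return {
        "premises": premises,
        "unsupported": unsupported,
        "sound": not unsupported,
    }

result = verify_answer(
    "FEVER was published in 2018. FEVER solves fact checking"
)
print(result["sound"])        # False: one premise is unsupported
print(result["unsupported"])  # ['FEVER solves fact checking']
```

A production version would replace the set lookup with citation retrieval and only then "suggest a different citation URL for the preceding text," as the quote proposes.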
From https://news.ycombinator.com/item?id=38353285 :
> "LLMs cannot find reasoning errors, but can correct them" https://news.ycombinator.com/item?id=38353285
> "Misalignment and [...]"