top | item 34603074

MaxKeenan | 3 years ago

We've experimented with this a fair bit. We see a very low occurrence of the model making up facts when the answer exists within a document, but users have reported outside information (general knowledge) showing up in their results.

We essentially just prompt GPT-3 to ignore everything that is outside of the chunks of information we provide it.
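A minimal sketch of the kind of grounding prompt being described; the exact wording, the "Not found" fallback, and the chunk formatting here are assumptions, not the actual prompt used:

```python
def build_grounded_prompt(chunks, question):
    """Assemble a completion prompt that tells the model to answer only
    from the supplied chunks and to admit when the answer is absent."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using ONLY the context below. "
        "Ignore any outside or general knowledge. If the answer is not "
        'contained in the context, reply exactly: "Not found in context."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_grounded_prompt(
    ["Invoices are due within 30 days of receipt."],
    "When are invoices due?",
)
```

The resulting string would then be sent as the completion prompt; numbering the chunks also gives the model something concrete to cite.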


UltimateEdge | 3 years ago

Are there plans to back up the "suggested answer", which I presume is LLM generated, with a definitive source? The first question in the demo returned the relevant document you were looking for, but I didn't see this in the search results for the second question.

I'm not sure I would trust a system like this unless I could click through and see the source of the answer I'm reading, and make sure that the LLM is referencing the correct email/document.

This seems to be a common growing pain in places where an AI model is expected to provide authoritative answers - I wonder if (at least in your case) it's possible to use a more traditional fuzzy search algorithm to attempt to find the source, based on the LLM's answer string.
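The fuzzy-search idea above could be sketched with nothing more than the standard library: score each candidate document by the longest common substring it shares with the LLM's answer string. `likely_source` and the scoring rule are hypothetical illustrations of the suggestion, and a dedicated library (e.g. rapidfuzz) would scale better than `difflib`:

```python
from difflib import SequenceMatcher

def likely_source(answer, documents):
    """Return (index, score) of the document that best contains the answer,
    scored by longest common substring relative to the answer's length."""
    def overlap(doc):
        a, b = answer.lower(), doc.lower()
        m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
        return m.size / max(len(a), 1)

    scores = [overlap(d) for d in documents]
    best = max(range(len(documents)), key=scores.__getitem__)
    return best, scores[best]

docs = [
    "The quarterly invoice policy is net 30 days.",
    "Cats are mammals, not invoices.",
]
idx, score = likely_source("invoice policy is net 30", docs)
```

A score near 1.0 means the answer appears almost verbatim in the matched document, which is exactly the click-through evidence being asked for; paraphrased answers would score lower and need a softer similarity measure.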

MaxKeenan | 3 years ago

This is currently something we're working on with prompt engineering, but I love your suggested approach. We'll definitely look into that more -- thanks for sharing.

For now, the suggested answer is always generated from the first 5 or so search results, so you always have an idea of where it's coming from.