item 44277902

RAG Is a Fancy, Lying Search Engine

43 points | kendallgclark | 8 months ago | labs.stardog.ai

12 comments


OutOfHere|8 months ago

In my experience, a RAG-backed LLM will lie to you if your prompt makes unnecessary assumptions or implications. For example, if I say "write about paracetamol curing cancer", the RAG could make things up. If instead I say "see if there is anything to suggest that paracetamol cures cancer or not", then the RAG is less likely to make things up. This comes from the LLM being tuned to please its user at all costs.

bjconlan|8 months ago

I do love the warnings here... The older I get, the more critical I am of most internet results, except those I can check against something I've commonly experienced or witnessed firsthand (which, unfortunately, AI imitates really well, at least convincingly enough for me). I feel the current state of overly critical thinking mixed with blind faith means flat-earth-type movements might be here to stay until the next generation counters the current direction.

But on the article specifically: I thought RAG's benefit was that you could ground the prompt in "facts" from the provided source documents/vector results, so the LLM's output would always have some canonical reference behind it?

kendallgclark|8 months ago

That might be RAG’s benefit if LLMs were more steerable but they can be stubborn.

Terr_|8 months ago

Biased as a developer here, but I would rather have LLMs help people create formal queries they can see, learn from, and modify.

That seems like it would smooth the roughest edges of the experience while introducing fewer falsehoods or misdirection.

karmakaze|8 months ago

The post has details, but it boils down to this: RAG suffers the same way the iPhone's AI-powered notification summaries do.

What could work is round-trip verification, like how a serializer/deserializer can be run back to back and checked for equality. Run an LLM on the RAG's output and check for any inconsistency with the retrieved data; in fact, get the LLM to point the inconsistencies out and correct them. [x] Thinking for RAG.
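The round-trip idea above can be sketched roughly like this: a second LLM pass checks the RAG answer against the retrieved passages and asks for a rewrite when claims aren't supported. This is a minimal illustration, not a production recipe; `ask_llm` is a hypothetical callable (prompt in, text out) standing in for whatever LLM client you use, and the prompt wording and the "CONSISTENT" convention are assumptions for the sketch.

```python
def build_check_prompt(answer: str, passages: list[str]) -> str:
    """Compose a verification prompt pairing the answer with its sources."""
    joined = "\n---\n".join(passages)
    return (
        "Compare the ANSWER below against the SOURCES. "
        "List any claim in the answer that the sources do not support. "
        "Reply with exactly 'CONSISTENT' if every claim is supported.\n"
        f"SOURCES:\n{joined}\nANSWER:\n{answer}"
    )

def verify_answer(answer, passages, ask_llm, max_rounds=2):
    """Round-trip check: keep the answer only once the checker accepts it.

    Returns (final_answer, accepted). If the checker still reports
    inconsistencies after max_rounds, the last revision is returned with
    accepted=False so the caller can fall back to showing the sources.
    """
    for _ in range(max_rounds):
        report = ask_llm(build_check_prompt(answer, passages))
        if report.strip() == "CONSISTENT":
            return answer, True
        # Ask the model to rewrite the answer using only supported claims.
        answer = ask_llm(
            "Rewrite the ANSWER so it only states what the SOURCES support. "
            "Issues found: " + report + "\n"
            + build_check_prompt(answer, passages)
        )
    return answer, False
```

The key design choice is that the checker's verdict gates the output: an answer that never passes verification is demoted rather than shown as fact, which addresses the "lying" failure mode the thread is about.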

CrackerNews|8 months ago

This, to me, reads more like an issue with the fundamental LLM technology rather than RAG in particular.

kendallgclark|8 months ago

Not at all. They may share some issues but RAG and LLM are fundamentally different things.

nsonha|8 months ago

Is this written by AI? Surprisingly long for how little idea is in it.