top | item 42555303

ganeshkrishnan | 1 year ago

>uses OpenAI for LLM inference and embedding

This becomes a cyclical hallucination problem. The LLM hallucinates and creates an incorrect graph, which in turn creates even more incorrect knowledge.

We are working on this issue of reducing hallucination in knowledge graphs, and using an LLM is not at all the right way.

sc077y | 1 year ago

Actually, the rate of hallucination is not constant across the board. For one, you're doing a sort of synthesis, not intense reasoning or retrieval, with the LLM. Second, the problem is segmented into sub-problems, much like how o1 or o3 does using CoT. Thus, the risk of hallucination is significantly lower compared to a zero-shot raw LLM or even a naive RAG approach.
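The decomposition idea above can be sketched roughly like this. This is a hypothetical illustration, not any particular product's pipeline: `call_llm` is a stand-in for whatever chat-completion API is in use, and the prompts, `extract_graph`, and `fake_llm` are all invented for the sketch. The point is that each call handles one narrow sub-task, and the final assembly step can reject triples that mention entities the earlier step never produced.

```python
# Sketch: knowledge-graph extraction split into narrow sub-steps,
# instead of one broad "build the whole graph" zero-shot prompt.
# `call_llm` is a placeholder for any LLM completion function.
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]

def extract_graph(text: str, call_llm: Callable[[str], str]) -> List[Triple]:
    # Step 1: entity extraction only.
    entities = call_llm(f"List the entities in: {text}").split(", ")
    # Step 2: relation extraction, constrained to the found entities.
    relations = call_llm(
        f"List relations between {entities} in: {text}"
    ).split(", ")
    # Step 3: triple assembly, with a grounding check that drops any
    # triple whose head or tail was not in the extracted entity set.
    triples: List[Triple] = []
    for rel in relations:
        answer = call_llm(f"Which pair of {entities} has relation '{rel}'?")
        head, tail = answer.split(" -> ")
        if head in entities and tail in entities:  # reject hallucinated nodes
            triples.append((head, rel, tail))
    return triples

# Toy deterministic "LLM" just to show the control flow.
def fake_llm(prompt: str) -> str:
    if prompt.startswith("List the entities"):
        return "Marie Curie, polonium"
    if prompt.startswith("List relations"):
        return "discovered"
    return "Marie Curie -> polonium"

print(extract_graph("Marie Curie discovered polonium.", fake_llm))
# -> [('Marie Curie', 'discovered', 'polonium')]
```

Each sub-call sees a smaller, more constrained task, and the cross-check in step 3 is one place where hallucinated graph nodes can be filtered before they feed back into the knowledge base.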