(no title)
ganeshkrishnan | 1 year ago
This becomes a cyclical hallucination problem. The LLM hallucinates and creates an incorrect graph, which in turn generates even more incorrect knowledge.
We are working on this problem of reducing hallucination in knowledge graphs, and using an LLM is not at all the right way to do it.
sc077y | 1 year ago