sc077y | 1 year ago

Actually, the rate of hallucination is not constant across the board. For one, you're doing a kind of synthesis, not heavy reasoning or retrieval with the LLM. Second, the problem is segmented into subproblems, much like how o1 or o3 does it with chain-of-thought (CoT). The risk of hallucination is therefore significantly lower than with a zero-shot raw LLM or even a naive RAG approach.
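
A rough sketch of what that decompose-then-synthesize flow could look like (illustrative only: call_llm is a hypothetical stand-in for whatever completion API you use, and the prompts are made up, not anything o1/o3 actually exposes):

    # Hypothetical sketch: decompose a question into subproblems, answer each,
    # then synthesize the partial answers, instead of one zero-shot call.
    def call_llm(prompt: str) -> str:
        # Placeholder: swap in your actual chat/completion client here.
        raise NotImplementedError("plug in your model provider of choice")

    def answer_with_decomposition(question: str) -> str:
        # Step 1: have the model break the question into smaller subquestions.
        plan = call_llm(
            "Break the following question into 3-5 independent subquestions, "
            "one per line:\n" + question
        )
        subquestions = [line.strip() for line in plan.splitlines() if line.strip()]

        # Step 2: answer each subquestion in isolation, keeping each call's scope narrow.
        sub_answers = [
            call_llm("Answer concisely; say 'unknown' if unsure:\n" + sq)
            for sq in subquestions
        ]

        # Step 3: synthesize from the partial answers rather than answering
        # the full question zero-shot.
        notes = "\n".join(
            "Q: " + sq + "\nA: " + ans for sq, ans in zip(subquestions, sub_answers)
        )
        return call_llm(
            "Using only the notes below, synthesize an answer to: " + question
            + "\n\nNotes:\n" + notes
        )

The point is that each call gets a narrower job (decompose, answer one subquestion, or synthesize from supplied notes), which is the kind of segmentation being contrasted here with a single zero-shot prompt.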
