You might be thinking of LLM-as-a-judge, where one simply asks another LLM to fact-check the response. That approach is indeed unreliable, because the judge model is itself prone to hallucinations, the very problem we are trying to mitigate in the first place.
TLM is instead an uncertainty-estimation technique applied on top of an LLM, not just another LLM model.
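To give a rough intuition for one flavor of such uncertainty estimation: you can sample the model several times and measure how much its answers agree with each other, low agreement suggesting low trustworthiness. A minimal sketch, not TLM's actual method, with a hypothetical `query_llm` stub standing in for a real API call:

```python
import random
from collections import Counter

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call sampled at nonzero
    # temperature; replace with an actual API in practice.
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def consistency_score(prompt: str, n_samples: int = 8) -> float:
    """Fraction of sampled responses agreeing with the most common answer."""
    samples = [query_llm(prompt) for _ in range(n_samples)]
    top_count = Counter(samples).most_common(1)[0][1]
    return top_count / n_samples

score = consistency_score("What is the capital of France?")
print(f"trustworthiness proxy: {score:.2f}")
```

The key point is that the score comes from the statistics of the model's own outputs, not from asking a second model to render a verdict.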