
cwalv | 8 months ago

That sounds reasonable, and I don't doubt that it's part of the reason. Still, if I understand correctly, the solution to hallucination is that they can essentially train the model to recognize when it "doesn't know", and to say so in that case rather than just puke back the highest-probability BS. I.e., it's a training-time factor, not an inference-time one, so it's not a fundamental cost issue, but more about priorities.
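To make the training-time angle concrete, here's a minimal sketch (all names hypothetical; it assumes you can sample the base model via some model_answer function) of one way an abstention fine-tuning dataset could be built: probe the model for self-consistency on known Q/A pairs, and map questions where it's unreliable to an explicit "I don't know" target instead of the gold answer.

    def build_abstention_dataset(qa_pairs, model_answer, n_samples=5):
        """qa_pairs: list of (question, gold_answer) tuples.
        model_answer: callable taking a question, returning a sampled answer string."""
        dataset = []
        for question, gold in qa_pairs:
            # Sample the base model several times to estimate self-consistency.
            answers = [model_answer(question) for _ in range(n_samples)]
            agreement = sum(a.strip() == gold for a in answers) / n_samples
            if agreement >= 0.8:
                # Model reliably knows this one: reinforce the correct answer.
                dataset.append({"prompt": question, "target": gold})
            else:
                # Model is unreliable here: teach it to abstain rather than guess.
                dataset.append({"prompt": question, "target": "I don't know."})
        return dataset

The resulting examples would then go through ordinary supervised fine-tuning, which is why the cost lands at training time rather than at inference.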
