dschuetz | 1 year ago

I went straight to the "how to fix" section with popcorn in hand and I wasn't disappointed: just add "doubt" layers for self-correction, beginning at the query itself. And then maybe tell the model "do not hallucinate". Sounds like a joke, but I think an AI model would actually take this seriously, because it can't tell the difference.
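
A minimal sketch of what such a "doubt" layer could look like, assuming a generic llm(prompt) -> str callable; the function names and prompt wording here are hypothetical illustrations, not taken from the article:

    from typing import Callable

    # Hypothetical interface: any callable mapping a prompt string to a
    # completion string (a real client call would be plugged in here).
    LLM = Callable[[str], str]

    def doubt_layer(llm: LLM, question: str, draft: str) -> str:
        """One self-correction pass: the model critiques its own draft,
        then rewrites it with the doubted claims removed or hedged."""
        critique = llm(
            f"Question: {question}\n"
            f"Draft answer: {draft}\n"
            "List any claims in the draft that may be unsupported."
        )
        return llm(
            f"Question: {question}\n"
            f"Draft answer: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the answer, dropping or hedging the doubted claims. "
            "Do not hallucinate."
        )

    def answer_with_doubt(llm: LLM, question: str, passes: int = 2) -> str:
        # "Beginning at the query itself": first have the model restate
        # what the question actually asks, then answer that restatement.
        clarified = llm(f"Restate precisely what this question asks: {question}")
        draft = llm(f"{clarified}\nAnswer concisely.")
        for _ in range(passes):
            draft = doubt_layer(llm, question, draft)
        return draft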

Context is still a huge problem for AI models, and it's probably still the main reason they hallucinate.
