top | item 43365952

profchemai | 11 months ago

The criticism feels harsh. Of course models don't know what they don't know; reporters can have the same biases. They could have worded it better as "lowers the probability of hallucinating", but it is correct that it helps guard against it. It's just that it's not a binary thing.

jgalt212 | 11 months ago

> we made sure to tell the model not to guess if it wasn’t sure

Fair enough, but it's kind of ridiculous that in 2025 this "hack" still helps produce more reliable results.
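The "don't guess if it wasn't sure" hack can be sketched as a prompt wrapper plus a post-hoc abstention check. This is a minimal illustration, not any model vendor's API: `build_prompt`, `is_abstention`, and the exact abstain wording are all hypothetical choices.

```python
# A minimal sketch of the "tell the model not to guess" hack.
# The wording and helper names here are illustrative assumptions,
# not part of any specific LLM provider's API.
ABSTAIN_INSTRUCTION = (
    "If you are not sure of the answer, reply exactly with "
    "\"I don't know\" instead of guessing."
)
ABSTAIN_MARKER = "i don't know"


def build_prompt(question: str) -> str:
    """Prepend the abstain instruction so the model may opt out."""
    return f"{ABSTAIN_INSTRUCTION}\n\nQuestion: {question}"


def is_abstention(answer: str) -> bool:
    """Treat any response containing the abstain marker as an abstention."""
    return ABSTAIN_MARKER in answer.lower()
```

In practice the model's reply would come from whatever completion API is in use; `is_abstention` then lets downstream code route abstentions (e.g. to a human or a retrieval step) instead of treating them as answers. As the comment notes, this lowers rather than eliminates the chance of a confident wrong answer.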

ijk | 11 months ago

Alas, current LLM prompting involves a lot of hacks. Half of them are useless, of course, while the other half are critical for success. The trick is figuring out which is which.