item 47172191

bofadeez | 3 days ago

LLMs are designed to fool you into thinking they're right by providing plausible answers.

Stop anthropomorphizing intermediate tokens as "reasoning" when all they can do is rationalize.

E.g. "This test script failed but probably for an unrelated reason. I'll mark it done and move on."


skybrian | 3 days ago

The point of the "reasoning" is to generate ideas that might get it unstuck. Ignoring "irrelevant" stuff is one way of getting unstuck.