top | item 47166386


oulipo2 | 3 days ago

There's nothing peculiar about this. It's just what you'd expect from a language model trained on large datasets: it reproduces a pattern commonly found in documents.


alastairr | 3 days ago

Not sure I agree. For lower-ability models, yes. But Claude Opus 4.6 is incredibly capable, so it's odd to me that it still has this residual 'misspeak' behaviour.

oulipo2 | 2 days ago

That's the issue: people anthropomorphize these models, but conceptually they're all the same. They just produce random hallucinations, trying to make those hallucinations match the "reality" of their training data as closely as possible.
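
A minimal sketch of what "matching the training data" means mechanically (this is a toy bigram counter, not how any real LLM is implemented; all names are illustrative): the model assigns probabilities to next tokens based on frequencies seen in training text, then samples. Every continuation is "plausible given the data", but which one you get on a given run is random.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count next-word frequencies in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word, rng=random):
    # Turn the observed counts into a probability distribution and sample.
    nexts = counts[word]
    total = sum(nexts.values())
    r = rng.random() * total
    for candidate, c in nexts.items():
        r -= c
        if r < 0:
            return candidate
    return candidate  # fallback for floating-point edge cases

random.seed(0)
print([sample_next("the") for _ in range(5)])
```

Scaled up (transformers instead of bigram counts, tokens instead of words), the principle is the same: sample from a distribution fitted to the training data, with no notion of truth beyond that fit.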