There's nothing surprising about this. It's just what you'd expect from a language model trained on large datasets: it reproduces patterns commonly found in documents.
Not sure I agree with you. For lower-ability models, yes. But Claude Opus 4.6 is incredibly capable, so it's odd to me that it still has this residual 'misspeak' behaviour.
That's the issue: people are anthropomorphizing these models, but conceptually they're all the same. They produce sampled "hallucinations" and try to make those hallucinations match the "reality" of their training data as closely as possible.
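The point above — that a model samples output so as to match the statistics of its training data — can be sketched with a toy bigram model. This is a deliberately minimal illustration (not how any production LLM actually works, and the corpus is made up): it counts which word follows which in training text, then samples the next word from that learned distribution, so it can only ever echo patterns present in the data.

```python
import random
from collections import defaultdict, Counter

# Toy "language model": learn next-word counts from a tiny made-up corpus,
# then sample continuations from the learned distribution.
corpus = "the cat sat on the mat the cat sat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word, rng):
    # Sample the next word proportionally to how often it
    # followed `word` in the training corpus.
    nexts = counts[word]
    words = list(nexts)
    weights = [nexts[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
out = ["the"]
for _ in range(5):
    out.append(sample_next(out[-1], rng))
print(" ".join(out))
```

Every word pair the sampler emits is, by construction, a pair that occurred in the training text — the "hallucination" is random, but it is constrained to look like the data.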
alastairr|3 days ago
oulipo2|2 days ago