I'm glad I have ChatGPT to turn that image with benchmarks into an accessible table lol. I like Claude Code, but Anthropic's accessibility in anything other than the accidental accessibility of a CLI is frustrating. Try it: load a screen reader like VoiceOver on Mac (since I know most programmers use Macs) and go to claude.ai. In the "write your prompt to Claude" box, type something like "What will the weather be like tomorrow?" and press Enter/Return. Then close your eyes for a good 30 seconds, and within those 30 seconds, tell me how you'd know the model has replied. Then try the same thing with ChatGPT. I would /love/ to be proven wrong.
Anthropic is again running scared of the open-weight models, which are rapidly catching up to them. Not even Sonnet or Opus is going to help with that.
It has already happened with the music-gen models. It's only a matter of time before the open-weight models overtake Anthropic.
Expect them to dial up the scaremongering until they IPO. The Claude family of models is their only AI product keeping them alive.
Curious if the 1M context window will be available by default in Claude Code. If so, that's a pretty big deal: "Sonnet 4.6’s 1M token context window is enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request. More importantly, Sonnet 4.6 reasons effectively across all that context."
Those hours that with gentle work did frame
The lovely gaze where every eye doth dwell,
Will play the tyrants to the very same
And that unfair which fairly doth excel:
I really don't get these companies posting disingenuous benchmarks. Every time, they pick and choose who to compare against. Not comparing to the latest 5.3-codex, which has been out a couple of weeks now, is absurd. Who are they trying to kid?
If you were writing a promotional post for your new model, would you include benchmarks of a competitor that's spanking you across the board? This is marketing.
It’s similar to or better than Opus 4.5 per the benchmarks, while being 2x-3x cheaper. Definitely worth it over Opus 4.6 if cost/tokens is the concern.
devinprater|12 days ago
edding360|12 days ago
ChrisArchitect|12 days ago
rvz|12 days ago
throwup238|12 days ago
catigula|12 days ago
dchuk|12 days ago
pkaye|12 days ago
a_void_sky|12 days ago
rishabhaiover|12 days ago
mudkipdev|12 days ago
meetpateltech|12 days ago
hxugufjfjf|12 days ago
deanc|12 days ago
falloon|12 days ago
rvz|12 days ago
People who do not know how reproducible research works.
Any benchmark presented by an AI lab must be reliably reproducible by someone independent of the lab presenting the results.
Otherwise, not only is it biased, but the numbers could simply be made up for marketing purposes.
AdamConwayIE|12 days ago
SWE-bench, for example, creates a predictions file and evaluates the results in its harness. Without Codex 5.3 being available in the API, that isn't possible.
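For context, the SWE-bench evaluation harness consumes a predictions file, one JSON record per benchmark task, and applies each model-generated patch inside the harness to score it against the task's tests. A minimal sketch of producing such a file (field names follow the public SWE-bench harness; the instance ID and patch content here are placeholders for illustration):

```python
import json

# Each SWE-bench prediction pairs a benchmark task ("instance_id") with the
# patch the model produced for it. The harness applies "model_patch" to the
# repo at the pinned commit and runs the task's tests to decide pass/fail.
predictions = [
    {
        "instance_id": "astropy__astropy-12907",   # example task ID
        "model_name_or_path": "claude-sonnet-4.6", # label for the run
        "model_patch": "diff --git a/foo.py b/foo.py\n...",  # placeholder diff
    },
]

# The harness accepts JSONL: one prediction object per line.
with open("predictions.jsonl", "w") as f:
    for pred in predictions:
        f.write(json.dumps(pred) + "\n")
```

The point upthread follows directly: generating `model_patch` entries requires querying the model, so a competitor model that isn't reachable via an API simply can't be run through the harness.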
tomlis|12 days ago
cube2222|12 days ago
- a reasonable improvement over Sonnet 4.5, especially with agentic tool use
- generally worse than Opus 4.6
Probably not worth it for coding, but a win for anybody building agentic AI assistants of any sort with Sonnet.
Handy-Man|12 days ago
As a reminder, Opus 4.5 was SOTA 2-3 weeks ago.