
Claude Sonnet 4.6

79 points | meetpateltech | 12 days ago | anthropic.com

26 comments


devinprater|12 days ago

I'm glad I have ChatGPT to turn that image with benchmarks into an accessible table lol. I like Claude Code, but their accessibility in anything other than accidental CLI accessibility is frustrating. Try it: load a screen reader like VoiceOver for Mac (since I know most programmers use Macs) and go to claude.ai. In the "write your prompt to Claude" box, type something like "What will the weather be like tomorrow?" and press Enter/Return. Close your eyes for a good 30 seconds, and within those 30 seconds, tell me how you'd know whether the model has replied. Then try the same thing with ChatGPT. I would /love/ to be proven wrong.

edding360|12 days ago

Thanks for sharing! Just tried it for the first time. Anthropic should really do better.

rvz|12 days ago

Anthropic is again running scared of the open-weight models, which are rapidly catching up to them. Not even Sonnet or Opus is going to help with that.

It has already happened with the music-gen models. It's only a matter of time before the open-weight models overtake Anthropic.

Expect them to dial up the scaremongering until they IPO. The Claude family of models is the only AI product keeping them alive.

throwup238|12 days ago

What are the latest open music models?

catigula|12 days ago

Chinese companies distilling frontier models is certainly a crisis, but it isn't one that implies said Chinese companies are anywhere in the "race".

dchuk|12 days ago

Curious whether the 1M context window will be available by default in Claude Code. If so, that's a pretty big deal: "Sonnet 4.6’s 1M token context window is enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request. More importantly, Sonnet 4.6 reasons effectively across all that context."

pkaye|12 days ago

Above 200k tokens of context they charge a premium. I think it's $10 per million input tokens.
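A minimal sketch of the tiered billing the comment describes: input tokens up to a 200k threshold at a base rate, anything beyond at a premium rate. The $10/M premium figure comes from the comment; the $3/M base rate and the exact billing rules are placeholders, not official Anthropic pricing.

```python
BASE_RATE = 3.00      # $ per 1M input tokens up to the threshold (assumed)
PREMIUM_RATE = 10.00  # $ per 1M input tokens above it (from the comment)
THRESHOLD = 200_000   # tokens billed at the base rate

def input_cost(tokens: int) -> float:
    """Return the input cost in dollars for a single request."""
    base = min(tokens, THRESHOLD)
    premium = max(tokens - THRESHOLD, 0)
    return base / 1e6 * BASE_RATE + premium / 1e6 * PREMIUM_RATE

# A full 1M-token request: 200k at the base rate plus 800k at the premium rate.
print(round(input_cost(1_000_000), 2))
```

Under these assumed rates, the premium tier dominates: the last 800k tokens of a 1M-token request cost over ten times the first 200k.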

rishabhaiover|12 days ago

I'm not seeing it in claude-code yet.

mudkipdev|12 days ago

What happened to sonnet 5?

meetpateltech|12 days ago

They're probably saving 5 for a bigger leap.

hxugufjfjf|12 days ago

Those hours that with gentle work did frame
The lovely gaze where every eye doth dwell,
Will play the tyrants to the very same
And that unfair which fairly doth excel:

deanc|12 days ago

I really don't get these companies posting disingenuous benchmarks. Every time, they pick and choose whom to compare against. Not comparing against the latest 5.3-codex is absurd when it's been out for a couple of weeks now. Who are they trying to kid?

falloon|12 days ago

If you were writing a promotional post for your new model, would you include benchmarks of a competitor that's spanking you across the board? This is marketing.

rvz|12 days ago

> Who are they trying to kid?

People who do not know how reproducible research works.

Any benchmark presented by an AI lab must be reliably reproducible by someone independent of the lab presenting the results.

Otherwise, not only is it biased, the numbers could simply be made up for marketing purposes.

AdamConwayIE|12 days ago

There aren't really any of the typical benchmark suites targeting Codex 5.3 because it's still not in the API.

SWE-bench, for example, generates a predictions file and then evaluates the results in its harness. Without Codex 5.3 being in the API, it can't.
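For context on the predictions-file step: SWE-bench-style harnesses consume a file with one JSON object per line, pairing each benchmark instance with the patch the model produced, which the harness then applies and tests. The field names below follow the swebench evaluation format as I understand it, and the patch is a made-up placeholder, not real model output.

```python
import json

# Hypothetical predictions for two benchmark instances. Each record maps a
# task ID to the unified diff the model generated for it.
predictions = [
    {
        "instance_id": "astropy__astropy-12907",    # benchmark task ID (example)
        "model_name_or_path": "claude-sonnet-4.6",  # label for the model under test
        "model_patch": "diff --git a/... b/...\n",  # placeholder unified diff
    },
]

# Write one JSON object per line (JSONL), the shape the harness ingests.
with open("predictions.jsonl", "w") as f:
    for p in predictions:
        f.write(json.dumps(p) + "\n")
```

The point of the comment stands: without API access to the model, there is no way to generate the `model_patch` entries in the first place, so the harness has nothing to evaluate.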

tomlis|12 days ago

gpt-5.3-codex isn't available via the API yet, and I'm pretty sure they were only testing via API access.

cube2222|12 days ago

So tl;dr it seems like it's:

- a reasonable improvement over Sonnet 4.5, especially with agentic tool use

- generally worse than Opus 4.6

Probably not worth it for coding, but a win for anyone building agentic AI assistants of any sort with Sonnet.

Handy-Man|12 days ago

It’s similar to or better than Opus 4.5 per the benchmarks, while being 2x-3x cheaper, so definitely worth it over Opus 4.6 if cost per token is the concern.

As a reminder, Opus 4.5 was SOTA 2-3 weeks ago.