Obvious political reasons and implications aside, a clear quality gap opened up late last year when Opus 4.5 was released, compared with GPT-5. Opus was obviously and demonstrably superior to any GPT-5 tier. The release of GPT-5.2 didn't improve matters, and then Opus 4.6 widened the gap further. Right now, talking to GPT-5.2 Pro is 10x slower than chatting with Opus 4.6, and the output returned is nevertheless generally lower quality and more "sloppy." What I'm getting at is that this could be, in part, because Claude is genuinely better at this point in time.
virgildotcodes|1 day ago
Codex 5.3 Xhigh > Opus 4.6 in my work to this point.
Hoping for Opus 4.7 or whatever comes next to rectify this as I'm a bit annoyed over having to drop to a lower quality model.
XCSme|1 day ago
But for the chat, I feel like ChatGPT got worse and worse.
jackschultz|1 day ago
I remember on the Opus 4.5 release day watching what it could do to the test app I wanted it to build and saying out loud to myself "oh shit," because of how much better it was at the conversation, planning, understanding, and building. Posts like this[0] say similar things: that the Opus 4.5 release plus Claude Code was the tipping point, that the gap is widening, and that Anthropic has far more momentum and is heading in the better direction with useful models that aren't fully aligned with bad actors.
[0] https://news.ycombinator.com/item?id=46515696