Why Developers Keep Choosing Claude over Every Other AI

65 points | gmays | 4 days ago | bhusalmanish.com.np

79 comments

bottlepalm|4 days ago

I don't think vibe coders know the difference, but often when I ask AI to add a feature to a large code base, I already know how I'd do it myself, and the answer Claude comes up with is usually the one I would have chosen. Codex and Gemini have burned me too many times, and I keep going back to Claude. I trust its judgment. Anthropic's models have always been a step above OpenAI's and Google's; it was like that even two years ago, so it must be something fundamental.

dandiep|4 days ago

For me, Codex does well at pure-coding based tasks, but the moment it involves product judgement, design, or writing – which a lot of my tasks do – I need to pull in Claude. It is like Claude is trained on product management and design, not just coding.

colechristensen|4 days ago

Codex and Gemini don't do as good a job or can't do what I ask them.

On the metric of project complexity versus getting lost and confused, Claude does a lot better than everything else I've tried. That's it.

tracker1|3 days ago

I'm there with you, though I've only been using it for a couple of months now. I find that as long as I spend a fair amount of time with Claude specifying the work before starting it, things tend to go really well. I have a general approach to how I want to run/build the software in development, and it goes pretty smoothly with Claude. I do have to review what it does and sanity-check things... I've tended to find bugs where I expect to see bugs, just from experience.

I keep using the analogy of working with a disconnected overseas dev team over email, since I've had to do that before. The difference is turnaround in minutes instead of the next day.

On a current project, I just have it keep expanding the TODO.md as we work through the details... I'd say it's going well so far: a Deno driver for MS-SQL using a Rust+FFI library. I still have some sanity checks around pooling, and I need to test a couple of Windows-only features (SSPI/Windows Auth and FILESTREAM) in a Windows environment, and then I'll be ready to publish... About 3-4 hours of initial planning, 3 hours of initial iteration, then another 1:1:1:1 hours of planning/iteration working through features, etc.

As an aside, I have noticed that a few times a day, particularly in the west coast afternoon and early evening, the entire system seems to run at about a third of its usual speed... I'm guessing that's when the load on Anthropic's network as a whole is at its peak.

quaintdev|4 days ago

Claude is good with code, but I've found Gemini is good for researching topics.

geldedus|3 days ago

The title is about developers, not vibe coders (no, they are not the same thing).

mrdependable|4 days ago

I use Claude for a few reasons.

1) I don't want to give OpenAI my money. I don't like how they are spending so much money to shape politics to benefit them. That seems to fly in the face of this being a public benefit. If you have to spend money like that because you're afraid of what the public will do, what does that say?

2) I like how Claude just gives me straight text on one side, examples on the other, and nothing else. ChatGPT and Gemini tend to go overboard with tables, lists, emojis, etc. I can't stand it.

3) A lot of technical online conversation seems to have been hollowed out in recent years. The amount of people making blog posts explaining how to use something new has basically tanked.

daxfohl|3 days ago

Wow, I'd always considered Claude more of a software tool and never really gave it a chance at regular chat, but yeah, after one session I think I'm a convert, for exactly #2.

I'm fine with charts, but ChatGPT is so long-winded and redundant. "When would I use such-and-such pattern?" "That's exactly the right question to ask! ... What you're really asking ... Why that's interesting ... Why some people find it critical ... Option 1 ... Option 2 ... Consideration ... Table comparing to so-and-so ... The deep reason ... What it all boils down to ... The one-line answer (tight!) ... The next thing you need to know ... I can also draw a useless picture for you. Would you like me to do that?"

sidrag22|4 days ago

There is also the very lame auto-win category that I happen to fall into...

I don't trust OpenAI or Google. Google more than proved it isn't trustworthy well before the LLM coding-tool era. I am legitimately not even giving them a chance.

Sadly, I assume Anthropic will at some point lose my trust too, but for now they just feel like the obvious choice for me.

So obviously I am a terrible overall observer, but I am sure I am not alone in the auto-win portion of devs choosing Anthropic.

nozzlegear|4 days ago

That was exactly why I had been a paying Anthropic customer as well – I trusted them more than I trusted OpenAI or Google. But I canceled my subscription this morning after the news that they've ditched their core safety promise [†], and they look likely to fold to the Pentagon's demands on autonomous weapons/surveillance as well.

[†] https://www.cnn.com/2026/02/25/tech/anthropic-safety-policy-...

azinman2|4 days ago

I believe Anthropic is the only one that lets you opt out of having your chats used for training on the developer subscription plans? Is that right?

tracker1|3 days ago

The sometimes hot garbage I've gotten from AI results for technical questions over the past year or so had me not even considering them from the start... I've tried GitHub Copilot (whatever the default engine is) and OpenAI and just found them annoying. Claude is the first one I've felt was more productive than annoying, and I've only just started using it.

ChrisMarshallNY|4 days ago

I've been using ChatGPT (Thinking). I like how it has learned how I do stuff, and keeps that in mind. Yesterday, I asked it to design an API, and it referenced a file I had sent in, for a different server, days earlier, in order to figure out what to do.

I'm not using it in the same way that many folks do. Maybe if I get to that point, I'll prefer Claude, but for my workflow, ChatGPT has been ideal.

I guess the best part is that it seems to be the absolute best at interpreting my requirements, including accounting for my human error.

ryoshu|4 days ago

Oof. I turned the history referencing off. I use ChatGPT for wildly diverging topics and it will bring things up that have zero relevance to what I'm currently looking for if history is on.

el_benhameen|4 days ago

I like this feature and rely on it too. I get that some people hate it and that it can make some pretty insidious mistakes when it uses it, but I’ve found it valuable for providing implicit context when I have multiple queries for the same project.

Worth noting that Claude also has a memory feature and uses it intelligently like this, sometimes more thoughtfully than ChatGPT does (fewer “out of left field” associations, smoother integration).

tracker1|3 days ago

Over the past few weeks, Claude has started doing that as well, i.e. recognizing my preference for Deno for scripting, or React+MUI when scaffolding a UI around something.

I've been using the browser/desktop for planning sessions on pieces of a larger project I'm putting together and it's been connecting the dots in unexpected ways from the other conversations.

I think the one disappointment is that I can't seem to resume a conversation from the web/desktop interface inside the code interface... I have to have it generate a zip that I can extract and then work from.

ChadMoran|4 days ago

Model aside, the harness of Claude Code is just a much better experience. Agent teams, liberal use of tasks and small other ergonomics make it a better dev tool for me.

dalenw|4 days ago

I've heard a lot of people prefer OpenCode to Claude Code, myself included. Having tried both, I have a much better time in OpenCode. Have you tried it?

I'll admit it lacks on the agent teams side but I tend to use AI sparingly compared to others on my team.

pinkmuffinere|4 days ago

> Half their agentic usage is coding. When that's your reality, you train for it. You optimize the tool use, the file editing, the multi-step workflows - because that's what your paying users are actually doing. Google doesn't have that same pressure.

I wonder if this is a strategic choice: Anthropic has decided to go after developers, a motivated but limited market, whereas the general populace might be more attracted to improved search tools, allowing Google/OpenAI/etc. to capture that larger market.

bonoboTP|4 days ago

They are heavily dogfooding. Coding is needed to orchestrate the training of the next Claude model, data processing, RL environments, evals, scaffolding, UI, APIs, automated experiments, cluster management, etc etc. This allows them to get the next model faster and then get the next one etc.

Making a model that's great at other kinds of knowledge/office work is coincidental; it doesn't feed back directly into improving the model.

sjsjzbbz|4 days ago

They're doing a lot of dev-hostile stuff:

- limiting model access when not using Claude Code

- Claude Code is a poorly made product: inefficient, buggy, etc. It shows they don't read the code

- thousands of open GitHub issues, with regressions introduced constantly

- dev-hostile changes like the recent one hiding what the agent is actually doing

However, they are very good at marketing and hype. I'd recommend everyone give pi or OpenCode a try. My guess is Anthropic actually wants vibe coders (a much broader market).

WarmWash|4 days ago

It's more likely that Anthropic feels that if they can crack just programming, their agents can rapidly do the legwork of surpassing the other labs.

j2kun|4 days ago

> Google doesn't have that same pressure.

I doubt it. Gemini is heavily used internally for coding, with integrations across Google's developer tooling. gemini-cli is not meaningfully different from Claude Code.

mark_l_watson|4 days ago

Could it be tooling like Claude Code? I just used Claude Code with qwen3.5:35b running locally to track down two obscure bugs in new Common Lisp code I wrote yesterday.

genghisjahn|4 days ago

I use Claude Code as an orchestrator and have the agents use different models:

  product-designer   ollama-cloud / qwen3.5:cloud
  pm                 ollama-cloud / glm-5:cloud
  test-writer        claude-code  / Sonnet 4.6
  backend-builder    claude-code  / Opus 4.6
  frontend-builder   claude-code  / Opus 4.6
  code-reviewer      codex-cli    / gpt-5.1-codex-mini
  git-committer      ollama-cloud / minimax-m2.5:cloud

I use Ollama Pro at $20/month and OpenAI at $20/month, and I have an Anthropic Max plan at $100/month.
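
For anyone curious how a setup like this is wired up: Claude Code defines subagents as markdown files with YAML frontmatter under `.claude/agents/`. A minimal sketch of what an entry like the code-reviewer might look like (the frontmatter fields follow Claude Code's documented subagent format; routing an agent to a non-Anthropic model such as gpt-5.1-codex-mini is not built in and would need extra tooling, so no `model` line is shown and the prompt text is invented):

```markdown
---
name: code-reviewer
description: Reviews diffs for correctness, style, and missed edge cases.
tools: Read, Grep, Glob
---

You are a careful code reviewer. Examine the changes you are given,
flag correctness bugs first, then style issues, and explain each
finding briefly. Do not rewrite code unless asked to.
```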

smt88|4 days ago

Qwen seems fine for analysis to me, but Opus 4.6 is far better as a sounding board or for writing code.

anonzzzies|4 days ago

Gemini is supposed to have this huge context, yet Gemini CLI (paid) often forgets by the next prompt what the previous one was about and starts doing something completely different, often switching natural or programming language. I use Codex, and with 5.3 it is better, but still not there compared to CC, for us anyway; it just goes looking for stuff, draws the most bizarre conclusions, and quite often ends up lost doing the wrong things. Mistral works quite well on smaller issues. Cerebras GLM rocks at quick analysis; if it had a larger token allowance and less rate limiting, it would probably be what I use all the time. Unfortunately, on a large project I hit a 24-hour block in less than an hour of coding. It does do a LOT in that time, of course, because of its bizarre speed.

IAmGraydon|4 days ago

Developers prefer Claude because that's Anthropic's brand, a very intentional choice. If you have a very specific use in mind (like coding), you aren't going to go for the jack-of-all-trades, master-of-none solution. You're going to go for the coding specialist, which Anthropic has squarely positioned itself as. Props to them for it: they correctly predicted that LLMs can do many things, but perhaps the most valuable is coding, as LLMs are well suited to it due to the rigidly defined syntax and the high cost of engineers.

theanonymousone|4 days ago

Claude the model or Claude (Code) the tool? I'm not sure what to think about an article that doesn't make it clear which one they are talking about...

geor9e|4 days ago

They are talking about Claude Code, the terminal app, which mainly uses the Opus and Sonnet models.

hirvi74|4 days ago

I am torn between Claude and GPT, though it was recently brought to my attention that I use LLMs in an old-fashioned way [1]. I will say that, based on my usage, both models seem very comparable in terms of accuracy and quality. Sometimes one might do things a bit differently, but they tend to be more similar than different.

When I am using an LLM for JS, I can't really tell the difference between the two. For C#, I think GPT might produce slightly better-quality code, but Claude's code seems more modern. I also feel like Claude makes slightly more minor mistakes, like forgetting to negate a boolean conditional check.
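
To illustrate the kind of slip I mean, here's a made-up TypeScript example (the function and field names are invented):

```typescript
// A form may be submitted only when it has no validation errors.
function canSubmit(form: { errors: string[] }): boolean {
  // The bug class described above is writing `form.errors.length > 0`
  // here, i.e. forgetting to invert the check.
  return form.errors.length === 0;
}

console.log(canSubmit({ errors: [] }));                // true
console.log(canSubmit({ errors: ["name required"] })); // false
```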

With Swift, I have found both models to be surprisingly awful. I am not sure if it is because of some of the more recent changes with Swift versions greater than 6.0, but both seem to produce wild results for me.

[1] I do not use Codex CLI nor Claude Code nor any IDE plug-ins. I just type questions into the web app and rarely copy/paste anything back and forth.

a11r|4 days ago

This resonates with my experience. At Morph we use Gemini for well-specified point coding tasks, and it does very well across millions of lines of code every day. We also use Claude Code as an engineering tool for our own codebase, and it does better at being adaptive and at working on open-ended issues.

sampton|4 days ago

I started using Codex 5.3. Compared to Opus 4.6, it's more precise at pinning down bugs and more concise with code. Opus can best be described as distracted and easily agreeable. Codex actually digs deeper for root causes and pushes back when I'm wrong.

nineteen999|3 days ago

Interesting. I started playing a little with Codex yesterday, and it did find some bugs Claude already knew about, and seemed pretty matter-of-fact about it. I might have to point it at some of the harder bugs and see how it goes.

pgm8705|4 days ago

I also have always gone back to Claude after trying new models... until GPT-5.3-Codex, specifically with the new Codex Mac app. I've been pretty much full-time with it for a few weeks now and have not missed Claude Code. It can overcomplicate things at times, but for the most part it provides working solutions on the first go and follows coding patterns that already exist in my app. With Claude, it would frequently knock out a feature with acceptable code quality that was completely broken and required a round of debugging.

I'm even getting by without hitting limits on the $20/month plan, whereas I needed to be on the $100/month one with Claude.

mowmiatlas|4 days ago

Well CC is awesome, there's that.

Codex is awesome too. OpenCode is awesome as well. It's so easy to transition from one tool to another, especially when one command in the project root is all it needs to get up to speed.

But I actually feel like asking Opus to review Codex and vice versa gives me the best results. Opus does push back on some review comments, and sometimes Codex oversells a feature, but at least to me it feels like I have more points of control and a different perspective, even if I could simulate it with two terminal sessions lol

aquir|4 days ago

I’m quite happy with the Codex app.

elevaet|4 days ago

I am too, and as a result haven't really given Anthropic's stuff a fair shake, and I'm so curious whether I'm missing out or if it's the same shit, different pile.

scotty79|4 days ago

> The benchmarks will tell you one thing. The developers who use these tools every day will tell you another. Usually, you should listen to the developers.

Isn't it a bit like asking horses what car features they like best?

gadflyinyoureye|4 days ago

Used to love Grok Code Fast 1 because it was free on GHCP. I gave it context and just let it churn on a solution. Claude is far better, but a finite resource. I think OpenCode plus GPT-4o might be my next step.

jkukul|4 days ago

I hope you're not too invested in GPT-4o, because it has been retired, so you'll need to use a different model :)

ahofmann|4 days ago

What is GHCP?

ronsor|4 days ago

GPT-4o is discontinued now

Traubenfuchs|4 days ago

At this point I completely stopped using anything else.

Even for vacation questions or psychotherapy, Claude is the best, despite (sometimes) complaining about not receiving a coding task.

elevaet|2 days ago

How often do you hit your usage limits with Claude Pro?

TZubiri|4 days ago

Was it ever confirmed that Anthropic did paid astroturfing? Or is this organic?

ghqst|4 days ago

I think this is organic. I've observed the exact same thing over the last week: I tried Google Antigravity and really liked it while I was using my Claude quota. When I ran out of Claude quota, I tried Gemini 3.1 Pro, and it was comparatively terrible at using the tools provided by the IDE (though it's a useful model in the browser for chat).

scuff3d|4 days ago

Given my own experiences with LLMs, I'm convinced about half the comments on any given thread are just bots told to hype <insert_product_here>

tayo42|4 days ago

Could be both?

I've been leaning more toward Claude. A lot of the LLM tropes really seem to be ChatGPT tropes. I feel like Claude doesn't do as much of the overly intense "it's this -- not that" pattern and isn't constantly acting like my hype man. Claude Code has been nice 90% of the time. I haven't tried too many competitors, though.

peyton|4 days ago

The article is AI-generated at least.

mosura|4 days ago

Mistral are quietly far better than all the noise would suggest.

geldedus|3 days ago

Because Opus 4.6 is better than any other AI coder.

istillcantcode|4 days ago

I prefer Google's. I can only afford the free models. I normally copy and paste my stuff into 4-5 models and compare the responses. It's probably a waste of time, but very mentally satisfying. I mostly program as a form of mental stimulation rather than to become a billionaire. From that perspective, using AI agents is not really the same experience, and it's less mentally stimulating than programming.

__alexs|4 days ago

I don't understand quite how Anthropic have managed to get so much mind share for Claude Code given the UX is pretty bad compared to something like Cursor.

tristor|4 days ago

I keep using Claude over other models (through Cursor) because the answers it gives and the plans it creates align with how I would personally approach the same problem if I were doing it myself. The other models /might/ produce a better final result as benchmarked/tested, but how they get there feels like complete nonsense to me.

adithyassekhar|4 days ago

Does anyone know if using Claude with OpenCode violates their new policies?

verdverm|4 days ago

Dev here, very happy with Gemini, especially Flash.

Google's AI products suck hard though.

WarmWash|4 days ago

Antigravity is pretty good; Gemini CLI is rough though.

simianwords|4 days ago

This article is 100% AI generated. I confirmed with pangram.

firmretention|3 days ago

Oh look another AI slop article pontificating on the merits of AI slop generators. It's slop all the way down.

davidguetta|4 days ago

I prefer ChatGPT because I can ask it to rewrite entire files with minimal changes in the chat (up to 3k lines) and it will do it.

Every other AI adds random opinionated and unwanted stuff.