
Anthropic: Developing a Claude Code competitor using Claude Code is banned

321 points | behnamoh | 2 months ago | twitter.com

175 comments

[+] ctoth|2 months ago|reply
I was all set to be pissed off, "you can't tell me what I can make with your product once you've sold it to me!" But no... this outrage bait hinges on the definition of "use".

You can use Claude Code to write code to make a competitor for Claude Code. What you cannot do is reverse engineer the way the Claude Code harness uses the API to build your own version that taps into stuff like the max plan. Which? makes sense?

From the thread:

> A good rule of thumb is, are you launching a Claude code oauth screen and capturing the token. That is against terms of service.

[+] whimsicalism|2 months ago|reply
under the ToS, no - you cannot use claude code to make a competitor to claude code. but you’re right that that appears to mostly be unenforced.

that said, it is absolutely being enforced against other big model labs who are mostly banned from using claude.

[+] behnamoh|2 months ago|reply
> You can use Claude Code to write code to make a competitor for Claude Code.

No, the ToS literally says you cannot.

[+] zzzeek|2 months ago|reply
not really. Here's their own product clarifying:

Based on the terms, Section 3, subsection 2 prohibits using Claude/Anthropic's Services:

  "To develop any products or services that compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models or resell the Services."

  Clarification:

  This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.

  What IS prohibited:
  - Using Claude to develop a competing AI assistant or chatbot service
  - Training models that would directly compete with Claude's capabilities
  - Building a product that would be a substitute for Anthropic's services

  What is NOT prohibited:
  - General ML/AI development for your own applications (computer vision, recommendation systems, fraud detection, etc.)
  - Using Claude as a coding assistant for ML projects
  - Training domain-specific models for your business needs
  - Research and educational ML work
  - Any ML development that doesn't create a competing AI service

  In short: I can absolutely help you develop and train ML models for legitimate use cases. The restriction only applies if you're trying to build something that would compete directly with Claude/Anthropic's core business.

So you can't use Claude to build your own chatbot that does anything remotely like Claude, which would be basically any LLM chatbot.
[+] nnutter|2 months ago|reply
Anthropic showed their true colors with their sloppy switch to using Claude Code for training data. They can absolutely do what they want but they have completely destroyed any reason for me to consider them fundamentally better than their competitors.
[+] jimmydoe|2 months ago|reply
I still don't clearly get what Zed, OpenCode, and other clients can and can't do with the Max plan. Developers want to use these 3p clients and pay you $200 a month, so why are you pissing us off? I understand some abusers exist, but you will never be able to ban them 100%, technically.

Very poor communication, despite some reasonable intentions; this could be the beginning of the end for Claude Code.

[+] gbear605|2 months ago|reply
> Developers want to use these 3p client and pay you 200 a month, why are you pissing us off

Presumably because it costs them more than $200 per month to sell you it. It's a loss leader to get you into their ecosystem. If you won't use their ecosystem, they'd rather you just go over to OpenAI.

[+] dathinab|2 months ago|reply
my guess?

they lose money on the $200/month plan, maybe even quite a bit. So that plan only exists to subsidize their editor.

Could be about the typical "all must be under our control" power fantasies companies have.

But if there really is "no moat" and open models will be competitive in just a matter of time, then having "the" coding editor might be majorly relevant for driving sales. Ironically, they seem to have already kind of lost that, if what some people say about Claude Code vs. OpenCode is true...

[+] behnamoh|2 months ago|reply
Honestly I think Claude Code enjoyed an "accidental" success much like ChatGPT; Anthropic engineers have said they never thought this thing would catch on.

But being first doesn't mean you're necessarily the best. Not to mention, they weren't the first anyway (aider was).

[+] matt3210|2 months ago|reply
I bet a lot of the tokens go unused each month. The per-token cost is pretty high for API access.
[+] d4rkp4ttern|2 months ago|reply
This message from the Zed discord (from a zed staffer) puts it clearly, I think:

“….you can use Claude code in Zed but you can’t hijack the rate limits to do other ai stuff in zed.”

This was a response to my asking whether we can use the Claude Max subscription for the awesome inline assistant (Ctl+Enter in the editor buffer) without having to pay for yet another metered API.

The answer is no; the above was a response to a follow-up.

An aside - everyone is abuzz about “Chat to Code” which is a great interface when you are leaning toward never or only occasionally looking at the generated code. But for writing prose? It’s safe to say most people definitely want to be looking at what’s written, and in this case “chat” is not the best interaction. Something like the inline assistant where you are immersed in the writing is far better.

[+] theshrike79|2 months ago|reply
Art (prose, images, videos) is very different from code when discussing AI Agents.

Code can be objectively checked with automated tools to be correct/incorrect.

Art is always subjective.

[+] dathinab|2 months ago|reply
yeah, but no,

I mean they could have put _exactly_ that into their terms of service.

Hijacking rate limits is also never really legal AFAIK.

[+] falloutx|2 months ago|reply
OpenCode is much better anyway, and it doesn't change its workflow every couple of weeks.
[+] fgonzag|2 months ago|reply
Yeah, honestly this is a bad move on Anthropic's part. I don't think their moat is as big as they think it is. They are competing against OpenCode + ACP + every other model out there, and there are quite a few good ones (even open-weight ones).

Opus might currently be the best model out there, and CC might be the best of the commercial tools, but once someone switches to OpenCode + multiple model providers depending on the task, they are going to have difficulty winning them back, considering pricing and their locked-down ecosystem.

I went from Max 20x and ChatGPT Pro to Claude Pro and ChatGPT Plus + OpenRouter providers, and I have now cancelled Claude Pro and GPT Plus, keeping only Gemini Pro (super cheap) and using OpenRouter models + a local AI workstation I built running MiniMax M2.1 and GLM 4.7. I use Gemini as the planner and my local models as the churners. Works great; the local models might not be as good as Opus 4.5 or Sonnet 4.7, but they are consistent, which is something I had been missing with all the commercial providers.

[+] rglullis|2 months ago|reply
I signed up to Claude Pro when I figured out I could use it on opencode, so I could start things on Sonnet/Opus on plan mode and switch to cheaper models on build mode. Now that I can't do that, I will probably just cancel my subscription and do the dance between different hosted providers during plan phase and ask for a prompt to feed into opencode afterwards.
[+] behnamoh|2 months ago|reply
I like how I can cycle through agents in OpenCode using Tab. In CC all my messages get interpreted by the "main" agent, so summoning a specific agent still wastes the main agent's tokens. In OpenCode, I can hit Tab and suddenly I'm talking to a different agent; no more "main agent" bs.
[+] whimsicalism|2 months ago|reply
i find cursor cli significantly better than opencode right now, unfortunately.

e: for those downvoting, i would earnestly like to hear your thoughts. i want opencode and similar solutions to win.

[+] otikik|2 months ago|reply
Behold the "Democratization of software development".
[+] llmslave3|2 months ago|reply
I find it slightly ironic that Anthropic benefits from ignoring intellectual property but then tries to enforce it on their competitors.

How would they even detect that you used CC on a competitor? There's surely no ethical reason not to do it, and it seems unenforceable.

[+] pixl97|2 months ago|reply
This is the modus operandi of every AI company so far.

OpenAI hoovered up everything they could to train their model, with zero shits given about IP law. But the moment other models learned from theirs, they started throwing tantrums.

[+] forty|2 months ago|reply
They know everything you do with Claude Code, since everything goes through their servers.
[+] falloutx|2 months ago|reply
They just ask the LLM to quietly report if anyone asks to build something they don't want. It's a massive surveillance issue.
[+] throwaw12|2 months ago|reply
Doesn't this make using the Claude Agents SDK dangerous?

Suppose I wrote a custom agent which performs tasks for a niche industry; wouldn't it be considered "building a competing service", since their Service is performing agentic tasks via Claude Code?

[+] pton_xd|2 months ago|reply
As long as I have a Claude subscription, why do they care what harness I use to access their very profitable token inference business?
[+] ankit219|2 months ago|reply
Because your subscription depends on that very API business.

Anthropic's COGS is the rent on some number of H100s. The cost of a marginal query is almost zero for them until the batch fills up and they need a new cluster. So API clusters are usually built for peak load, running at low utilization (partially filled batches) at any given time. Given that AI's peak demand is extremely spiky, they end up with low utilization numbers on the API side.

Your subscription is supposed to use that spare capacity; hence the token costs are not that high, and hence you can buy it cheaply. But it needs careful management so that you don't overload the system. There is Claude Code telemetry which identifies the request as lower priority than API traffic (and probably decides on queueing + caching too). If your harness makes 10 parallel calls every time you query, and doesn't manage context as well as Claude Code does, it's overwhelming the system and degrading performance for others too. And if everyone just wants to use the subscription and there are no API takers, the price of the subscription isn't sustainable anyway. In a way you are relying on others' generosity for the cheap usage you get.

It's reasonable for a company to unilaterally decide how it monetizes its extra capacity, and it's not unjustified for them to care. With a subscription you are not purchasing the promise of X tokens; for that you need the API.
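The spare-capacity argument above can be sketched with toy numbers (all figures below are hypothetical, not Anthropic's actual costs or capacities):

```python
# Toy model of batched-inference economics. Every number is made up.
# A cluster is fixed monthly rent; the marginal cost of one more query
# is ~zero until the batch is full and a new cluster must be rented.

CLUSTER_RENT = 100_000        # hypothetical $/month per cluster
CLUSTER_CAPACITY = 1_000_000  # hypothetical queries/month per cluster

def monthly_cost(queries: int) -> int:
    """Rent for enough clusters to serve the load (always at least one)."""
    clusters = max(1, -(-queries // CLUSTER_CAPACITY))  # ceiling division
    return clusters * CLUSTER_RENT

def marginal_cost(queries: int) -> int:
    """Cost of one extra query on top of the current load."""
    return monthly_cost(queries + 1) - monthly_cost(queries)

# Subscriptions soak up slack: mid-batch queries are free at the margin,
# but the query that spills over costs a whole new cluster's rent.
print(marginal_cost(500_000))    # 0
print(marginal_cost(1_000_000))  # 100000
```

This is why a noisy harness matters: load below capacity is nearly free, but load that spills past it forces new peak-sized provisioning.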

[+] arjie|2 months ago|reply
Demand-aggregation allows the aggregator to extract the majority of the value. ChatGPT the app has the biggest presence, and therefore model improvements in Claude will only take you so far. Everyone is terrified of that. Cursor et al. have already demonstrated to model providers that it is possible to become commoditized. Therefore, almost all providers are seeking to push themselves closer to the user.

This kind of thing is pretty standard. Nobody wants to be a vendor on someone else's platform. Anthropic would likely not complain too much about you using z.ai in Claude Code. They would prefer that. They would prefer you use gpt-5.2-high in Claude Code. They would prefer you use llama-4-maverick in Claude Code.

Because regardless of how profitable inference is, if you're not the closest to the user, you're going to lose sooner or later.

[+] gbear605|2 months ago|reply
Because Claude Code is not a profitable business; it's a loss leader to get you to use the rest of their token inference business. If you were to pay for Claude Code via the normal API, it would cost at least 5x as much, if not more.
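The loss-leader claim is easy to sanity-check with made-up numbers (neither the token price nor the usage figure below is Anthropic's; both are invented for illustration):

```python
# Hypothetical comparison: flat subscription vs. metered API pricing.
# Every number here is invented purely for illustration.

PLAN_PRICE = 200                 # $/month flat plan
API_PRICE_PER_MTOK = 10.0        # $ per million tokens, hypothetical blend
TOKENS_PER_MONTH = 150_000_000   # heavy agent loops burn tokens quickly

api_cost = TOKENS_PER_MONTH / 1_000_000 * API_PRICE_PER_MTOK
multiple = api_cost / PLAN_PRICE

print(f"API: ${api_cost:.0f}/mo, plan: ${PLAN_PRICE}/mo, {multiple:.1f}x")
```

Under these assumed figures the metered bill comes out several times the flat plan, which is the shape of the "loss leader" argument.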
[+] pjmlp|2 months ago|reply
I still remember the backlash Borland got when they had the clever idea of forbidding writing compilers with Borland C++, which they naturally had to roll back.

Some people never learn from history, it seems.

[+] viraptor|2 months ago|reply
"no, it never works for those people... But it may work for us!"
[+] akomtu|2 months ago|reply
AI is built by violating all rules and moral codes. Now they want rules and moral code to protect them.
[+] Imustaskforhelp|2 months ago|reply
This is a highly monopolistic action, in my opinion, from Anthropic, which actively feels the most hostile toward developers.

This really shouldn't be the direction Anthropic goes in. It is such a negative direction; they could have instead tried to cooperate with the large open-source agents and talk with them, but they decided to do this, which in the developer community is met with criticism, and rightfully so.

[+] narmiouh|2 months ago|reply
I wonder how this will affect future Anthropic products, given that prior art/products already exist that were built using Claude.

If this is only meant to limit knowledge distillation for training new models, or people copying Claude Code specifically, or Max plan credentials being used as an API replacement, they could properly carve out exceptions rather than being so broad that they risk turning away new customers for fear of (future) conflict.

[+] viraptor|2 months ago|reply
If they think that will work, they're really silly. Is a ToS change going to stop a corp which can gain millions of dollars and can technically work around the protection in hours? Yeah, nah..
[+] zingar|2 months ago|reply
Is this a standard tech ToU item?

Is this them saying that their human developers don’t add much to their product beyond what the AI does for them?

[+] oblio|2 months ago|reply
Imagine if Visual Studio said "you can't use VS to build another IDE".
[+] VoxPelli|2 months ago|reply
Sounds like standard terms from lawyers – not very friendly to customers, very friendly to company – but is it particularly bad here?

I remember when I was part of procuring an analytics tool for a previous employer and they had a similar clause that would essentially have banned us from building any in-house analytics while we were bound by that contract.

We didn't sign.

[+] bionhoward|2 months ago|reply
Yup, I’ve been crowing about these customer noncompetes for years now, and it’s clear Anthropic has one of the worst ones. The real kicker is, since Claude Code can do anything, you’re technically not allowed to use it for anything, and everyone just depends on Anthropic not being evil.
[+] Centigonal|2 months ago|reply
It seems like Anthropic is taking the Apple approach to these apps. Apple made it hard to mod their hardware, or run other OSs on the hardware, or run their OS on other hardware, or use other app stores on their phones. Basically, they want to make it so that you buy into the Apple stack with one or two all-or-nothing decisions, with very little room for mixing and matching.

Not a very hacker-friendly strategy, but Apple's market cap is pretty big. I think it comes down to whether Anthropic can make a product with enough of a lead over competitors to offset the restrictions.

[+] with|2 months ago|reply
I think there are issues with Anthropic (and their ToS); however, banning the "harnesses" is justified. If you're relying on scraping a web UI or reverse-engineering private APIs to bypass per-token costs, it's just VC subsidy arbitrage. The consumer plan has a different purpose.

The ToS is concerning, I have concerns with Anthropic in general, but this policy enforcement is not problematic to me.

(yes, I know, Anthropic's entire business is technically built on scraping. but ideally, the open web only)

[+] mcintyre1994|2 months ago|reply
https://xcancel.com/SIGKITTEN/status/2009697031422652461

This tweet reads as nonsense to me

It's quoting:

> This is why the supported way to use Claude in your own tools is via the API. We genuinely want people building on Claude, including other coding agents and harnesses, and we know developers have broad preferences for different tool ergonomics. If you're a maintainer of a third-party tool and want to chat about integration paths, my DMs are open.

And the linked tweet says that such integration is against their terms.

The highlighted term says that you can't use their services to develop a competing product/service. I don't read that as the same as integrating their API into a competing product/service. It does seem to suggest you can't develop a competitor to Claude Code using Claude Code, as the title says, which is a bit silly, but doesn't contradict the linked tweet.

I suspect they have this rule to stop people using Claude to train other models, or competitors testing outputs etc, but it is silly in the context of Claude Code.