I’ve always dwelled on $5-a-month subscriptions for iPhone apps due to subscription fatigue, yet I find myself signing up for $200 AI subscriptions without a moment’s hesitation.
What do you do with a $200/mo subscription to Anthropic? I’d consider myself a power user, and I’ve never come close to a rate limit on the $20 subscription.
If you discuss a plan with CC thoroughly upfront, covering all the integration points where things might go off the rails, perhaps checkpointing the plan in a file and then starting a fresh CC session for coding, then CC will usually one-shot a 2k-LoC feature uninterrupted, which is very token-efficient.
If the plan isn’t crystal clear, people end up arguing with CC over this and that, and token usage suffers.
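The "checkpoint the plan in a file" step might look something like the sketch below. The file name, feature, and section headings are my own illustration, not a CC convention:

```
# PLAN.md -- feature: bulk CSV export (hypothetical example)

## Scope
Add an "Export all" button that streams the current result set as CSV.

## Integration points
- API: new export endpoint; must reuse the existing auth middleware
- DB: read-only query against the results view; no schema changes
- UI: toolbar button; disabled while an export is in flight

## Steps
1. Backend endpoint with a streaming response
2. Frontend button and download handling
3. Tests for auth, empty result sets, and large exports
```

A fresh CC session can then be pointed at this file, so the coding run starts from a fixed spec instead of a long negotiation transcript.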
I personally find Gemini 2.5 Pro and o4.1 mini handle complexity better than Claude Code. I was a power user of Claude Code for a couple of months, but its bias toward action repeatedly led me down the wrong path. What am I missing?
Are there numbers available to support this? Software engineering in the U.S. is well compensated; $200/mo is a small price to pay if it makes a big difference in productivity.
Yes, but that doesn't mean they aren't finding real value.
The challenge with the bubble/not-bubble framing is the question of long-term value.
If the labs stopped spending money today, they would recoup their costs. Quickly.
There are possible risks (could prices go to zero because of a loss leader?), but I think Anthropic and OpenAI are both sufficiently differentiated that they would be profitable, extremely successful companies by all accounts if they stopped spending today.
So the question is: at what point does any of this stop being true?
The point is that if a minority is prepared to pay $200 per month, then what is the majority prepared to pay? I also don’t think this is such an extreme minority; I know multiple people in real life with these kinds of subscriptions.
OccamsMirror|7 months ago
Now I just find myself exasperated at its choices and constant forgetfulness.