The financial market stuff is over my head and I don't have a dog in this fight, but I think "Nobody is replacing Salesforce with their internally vibe coded software" is just false? Both taken literally [0] [1] and as denying the general trend. Just in my company we already replaced a WMS software subscription with our own solution, and I wouldn't have been able to write it fast enough, or maintain it by myself, without Claude Code.
I'd say "Not perfectly or with every edge case handled, but well enough that the CIO reviewing a $500k annual renewal started asking the question “what if we just built this ourselves”" is an accurate description.
I didn't think something could be worse than everyone using Salesforce, but everyone using a different, constantly broken, incompatible SF clone that no one understands may be that.
Agreed. If anything, it puts downward pressure on pricing. Even if the CIO still buys Salesforce or whatever other tool, they won't be willing to pay as much.
I do enjoy a good Ed Zitron sneer. The fact that the original article moved markets says a lot about the critical thinking skills of stock market traders.
You should look into how he destroyed the small indie MMO Darkfall a few years ago, giving it 2/10 in a Eurogamer review without ever playing it. The developers had receipts and could prove that he hadn't played it.
It doesn't have any material effect on this article, but it says something about his ethics.
I'm about halfway through the original memo, and I hate the fact that kernels of truth lie here and there. For example, I worked as a full-stack developer for about 2 years, and now I've been forced into what the memo calls the "gig economy" just to pay the rent, because companies slowed down hiring junior developers thanks to... honestly, at this point I don't really care what I have to thank for it.
All I know is, whenever I read testimonies from people whose companies suddenly decided to force LLM usage and become "AI first", with colleagues opening PRs that are only machine-reviewed and contain implementations they cannot justify beyond "Claude wrote it", I burn out just reading them. And it's only going to get worse until it becomes better, but not for the developers.
Honestly, the one thing I could see justifying all the investment companies are making in LLM-assisted coding is the full automation of software production. I can only see the current state of things as their "end game" if they eventually jack up pricing to tap directly into corporate budgets rather than individual developers' budgets.
Ed's main thesis is that costs are unsustainable for AI companies, but this is clearly wrong.
The unit cost is going down and has fallen by more than 20-30x over the years. Sure, the fixed cost of training is going up, but that's because of the implied returns. Once the returns to training stop materializing, training spend would simply shrink, modulo cutoff-date updates. The companies have the choice to just stop training and focus on inference cost reduction.
What am I missing here? Unless consumers decide they are no longer willing to pay as much as before, with their expectations rising even as prices fall, what else could go wrong?
I've started to feel like Ed Zitron is actively hurting people I care about.
I'm lucky to have worked in the field for a long time, and be able to spend a lot of tokens. In the last month it's become clear to me that the tech works. The science is done, and what's left is engineering.
There are a lot of risks and mitigations and theory to build, but it's all solvable. The tech isn't mature, but neither was the Internet 30 years ago. And we built transatlantic cables and ran new wires to everyone's house.
People I care about, engineers with 20 years of experience, are having mental health breakdowns, caused by Zitron's work. They insist the tech will never work, and avoid learning about it, becoming progressively more paranoid and isolated. I'm trying to be supportive and help them start to recover, but it's slow going.
If someone is having a crisis about this, I hope they start talking to a therapist. I don't need them to agree with me, but I do need them to not harm themselves.
> They insist the tech will never work, and avoid learning about it, becoming progressively more paranoid and isolated.
They can always learn the technology later, when and if it proves itself to be useful :) I personally don't understand the hype, even after using Claude and other AI tools - but perhaps that will change in the future.
Not sure how this comment got upvoted; calling skepticism of an emerging industry a "mental breakdown" and suggesting those "suffering" from it talk to a therapist doesn't really clear the bar for discussion here. This reads more like a manager being salty that their team isn't using up all the Grok budget this quarter or whatever.
And let it be clear that nobody is being "actively hurt" by legitimate economic/business grievances. This is victim-blaming and disgusting rhetoric.
There's nothing to recover from; what are you even talking about? I'm not a token user (and I can't predict whether the future will force me to use tokens, but still). That the industry is collectively having a delusion about what constitutes good software (in all senses of the word: functionality and consequences for society) is clear to see, and it's something I too fear we might never recover from, but I stand quite clearly on the side of people, not of corporations hoping to extract more, more, more.
> They insist the tech will never work, and avoid learning about it, becoming progressively more paranoid and isolated. I'm trying to be supportive and help them start to recover, but it's slow going.
If you are right, and the tech works, both you and them will be continuing this conversation in a soup kitchen.
> "What if our AI bullishness continues to be right... and what if that’s actually bearish"

what if pee pee was poo poo
Despite the vulgarity, it's exceptionally illuminating how much some of these slop pieces are a mere pretension of rhetoric. I see this pretty consistently with a lot of the material I come across on the job that has gone through the LLM meat-grinder.
Also, the comment made me giggle like a little kid.
What's pretend-rhetoric about it? They're positing agents will prove to be very capable, but that this would ultimately be a bad thing by automating away too much of the economy. You can argue whether that's plausible or not, but it isn't an incoherent or vapid argument.
Only those with no understanding of how compliance works at multinationals think that replacing Salesforce or Monday with internally developed systems, even with AI-assisted tooling, is a reasonable use of their engineers' time.
> I've also heard Cory Doctorow recently offer a similarly dismissive view, describing AI as "just statistics".
Well, AI partisans have applied grandiose terms like "thinking," "intelligence," and "soul" to these machines. It's not wrong to push back and remind people what they really are.
Okay, I'm tired of reading the debate about costs going down and therefore Ed is wrong. The cost of running the inference is not the problem. The cost of the input CHIPS is the problem. Let's return to Dario Amodei's interview [0] with Dwarkesh, shall we, for AI Economics 101?
Here goes:
The Epoch data everyone keeps citing measures the price per token charged to API customers. That's the sticker price. It tells you nothing about whether the business is viable, because the existential risk for AI companies isn't the marginal cost of running a query. It's the upfront capital expenditure on chips and datacenters, committed years before you know what demand looks like.
Anthropic CEO Dario Amodei spelled this out in his Dwarkesh interview. Here's the short version:
1. Data centers take 1-2 years to build out.
2. Each gigawatt costs roughly $10-15B per year.
3. The industry is currently at ~10-15 GW, scaling roughly 3x annually.
4. By 2028, ~100 GW. By 2029, ~300 GW.
5. We're talking multiple trillions per year in committed infrastructure spend across the industry.
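As a sanity check on the arithmetic in that list, here is a quick back-of-the-envelope sketch. The ~3x growth rate is taken from the list above; the starting capacity and per-GW cost are midpoints of the cited ranges, which are my assumption:

```python
# Back-of-the-envelope projection of the datacenter build-out described above.
# Midpoints are assumed wherever the cited figure was a range.
gw = 12.5              # ~10-15 GW today (midpoint assumption)
cost_per_gw = 12.5e9   # ~$10-15B per GW per year (midpoint assumption, USD)
growth = 3.0           # ~3x annual scaling, as cited

projection = []
for year in range(2025, 2030):
    projection.append((year, gw, gw * cost_per_gw))  # (year, capacity, implied spend)
    gw *= growth

for year, capacity, spend in projection:
    print(f"{year}: ~{capacity:,.0f} GW, implied spend ~${spend / 1e12:.1f}T/yr")
```

This reproduces the rough shape of the claims: capacity on the order of 100 GW around 2027-2028, with implied annual spend crossing into the trillions.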
Now NVIDIA's Q4 earnings [1], which printed today:
1. $68.1B in quarterly revenue, $62.3B from data center alone.
2. Full-year: $215.9B, up 65% YoY. Guiding $78B next quarter.
3. Someone is writing those checks. Those checks are not refundable.
Dario, who believes we're 1-3 years from a "country of geniuses in a data center," described his own demand prediction as a "hellish" problem.
His exact framing: If this revenue comes in at $800B instead of $1T, "there's no force on earth, there's no hedge on earth" that could stop him from going bankrupt if he'd bought compute at the higher projection.
He's at ~$10B annualized revenue today, and he won't commit to buying at the scale his own thesis demands, because being off by a single year is fatal.
This is the actual argument (I'm not saying this is Ed's argument, but this is the argument against these companies). Not "inference tokens are expensive."
The argument is structural: these companies must pre-commit billions in non-recoverable CAPEX based on demand projections that are, by the CEO's own admission, a coin flip.
The gross margins on serving tokens might be great. But the training spend for next-gen models grows exponentially, and it has to be funded before that model earns a dollar.
The Epoch chart measures what customers pay per token. It doesn't measure the $215.9B NVIDIA invoice those customers collectively funded this year, or that these chip purchases are one-way bets against future demand that may or may not materialize.
Inference costs going down 20x is wonderful for consumers. It tells you almost nothing about whether the companies making those chips, or the companies buying them, will survive the demand prediction gauntlet.
And if we're being honest, the Epoch data showing 9x to 900x price drops per year should make you more nervous, not less, because it means the asset you bought last year is depreciating at a rate that makes used cars look like gold bars.
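To make that depreciation comparison concrete, here is a toy sketch with made-up numbers. It assumes, purely for illustration, that a fixed fleet's earning power scales with the market price per token, and it uses the low end of the cited 9x-900x range:

```python
# Toy depreciation comparison: a GPU fleet whose output gets repriced
# 9x cheaper per year vs. a used car losing ~20% of its value per year.
# All figures are illustrative assumptions, not real financials.
fleet_cost = 1_000_000_000   # hypothetical $1B chip purchase
price_decline = 9.0          # low end of the cited per-year token price drop
car_retention = 0.80         # a used car keeps roughly 80% of value per year

fleet_value_y1 = fleet_cost / price_decline   # earning-power proxy after 1 year
car_value_y1 = fleet_cost * car_retention     # same capex in "used car" terms

print(f"Fleet after 1 year: ~${fleet_value_y1 / 1e6:.0f}M")
print(f"Car after 1 year:   ~${car_value_y1 / 1e6:.0f}M")
```

Under those assumptions the fleet loses roughly 89% of its earning power in a single year, while the car loses about 20%; at the high end of the cited range the comparison only gets worse.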
> "Here is an annotated version of the Citrini Memo with my own intro. It is analyslop - scare-fiction written to ingratiate AI boosters and analysts/traders with tales of ultra-automation and socialist data center policies. Shameful that the markets reacted at all."
It’s sort of disappointing to me how on both sides it seems hard to have any sort of rational perspective. I find both the Citrini memo (and the subsequent market reaction) and Ed Zitron’s critique of it to be wildly off-base.
Did you actually read the articles he wrote going through the finances of these companies? He definitely has a bone to pick, but his numbers don't lie. The returns these AIs need to generate, given the amount of spend, are so ridiculous that unless they really do automate most jobs, they're screwed. There's a reason these companies only post AI revenue now, not profit.
Ed Zitron, from what little I have heard of him, seems incredibly irrational. I don't think I've ever seen anybody stick their head deeper in the sand than he does.
It's one thing to dislike or even detest something, but to constantly claim it is worthless and without use when people are already benefitting from it every day is nothing short of delusion.
BrenBarn|5 days ago
Rarely do I read something that starts off with such promise!
pityJuke|5 days ago
[0]: https://bsky.app/search?q=from%3Aedzitron.com+diaper
arowthway|5 days ago
[0] https://lovable.dev/blog/how-a-startup-replaced-a-salesforce...
[1] https://seekingalpha.com/news/4144652-klarna-shuts-down-sale...
decimalenough|5 days ago
HN discussion: https://news.ycombinator.com/item?id=47114579
returnInfinity|5 days ago
He has been a perpetual bear
jbreckmckye|5 days ago
His argument is not "this tech doesn't work", but rather "these businesses aren't economically viable"
And that the smoke-and-mirrors accounting and perpetual thirst for more billions indicate just how unviable it is
Whilst he does dunk on LLM capabilities, the framing is the business angle - can Anysphere etc. actually form a moat and make a profit?
namcheapisdumb|1 day ago
lmfao
LaSombra|4 days ago
Salesforce, SAP, etc exist for a reason.
ChrisArchitect|4 days ago
An AI doomsday report shook US markets
https://news.ycombinator.com/item?id=47138860
[0] https://www.youtube.com/watch?v=n1E9IZfvGMA&t=2298s
[1] https://nvidianews.nvidia.com/news/nvidia-announces-financia...
notachatbot123|5 days ago
What is this document?
What is the context?
nielsbot|5 days ago
https://bsky.app/profile/edzitron.com/post/3mfkc63h6222l
seanhunter|5 days ago
I wish everyone would just calm down a bit.
frozenseven|5 days ago
"AI fake, AI poo poo, AI going away!" is the only argument he ever had. Nothing more.
000ooo000|4 days ago
That's an interesting way to start criticism about ignorance