"...being entirely blunt, I am an AI skeptic. I think AI and LLM are somewhat interesting but a bit like self-driving cars 5 years ago - at the peak of a VC-driven hype cycle and heading for a spectacular deflation.
My main interest in technology is making innovation useful to people and as it stands I just can't conceive of a use of this which is beneficial beyond a marginal improvement in content consumption. What it does best is produce plausible content, but everything it produces needs careful checking for errors, mistakes and 'hallucinations' by someone with some level of expertise in a subject. If a factory produced widgets with the same defect rate as ChatGPT has when producing content, it would be closed down tomorrow. We already have a problem with large volumes of bad (and deceptive!) content on the internet, and something that automatically produces more of it sounds like a waking nightmare.
Add to that the (presumed, but reasonably certain) fact that common training datasets being used contain vast quantities of content lifted from original authors without permission, and we have systems producing well-crafted lies derived from the sweat of countless creators without recompense or attribution. Yuck!"
I'll be interested to see how long it takes for this "spectacular deflation" to come to pass, but having lived through 3 or so major technology bubbles in my working life, my antennae tell me that it's not far off now...
>can't conceive of a use of this which is beneficial beyond a marginal improvement in content consumption
AlphaFold is having a big influence in medical research. There's more to AI than chatbots.
The work they are doing there now is quite interesting - article on it: https://www.labiotech.eu/in-depth/alpha-fold-3-drug-discover... I've got a personal interest because my sister has ALS, and I think an in silico breakthrough is the only thing that could fix that before she dies.
> but everything it produces needs careful checking for errors, mistakes and 'hallucinations' by someone with some level of expertise in a subject
Nah you just post it, if people point out the mistakes the comment is treated as a positive engagement by the algorithm anyway, unfortunately for anyone that cares.
I too am deeply skeptical of the current economic allocation, but it’s typical of frontier expansions in general.
Somehow, in AI, people lost sight of the fact that transformer architecture AI is a fundamentally extractive process for identifying and mining the semantic relationships in large data sets.
Because human cultural data contains a huge amount of inferred information not overtly apparent in the data set, many smart people confused the results with a generative rather than an extractive mechanism.
...To such a point that the entire field is known as “generative” AI, when fundamentally it is not in any way generative. It merely extracts often unseen or uncharacterized semantics, and uses them to extrapolate from a seed.
There are, however, many uses for such a mechanism. There are many, many examples of labor where there is no need to generate any new meaning or “story”.
All of this labor can be automated through the application of existing semantic patterns to the data being presented, and to do so we suddenly do not need to fully characterize or elaborate the required algorithm to achieve that goal.
We have a universal algorithm, a sonic screwdriver if you will, with which we can solve any fully solved problem set by merely presenting the problems and enough known solutions so that the hidden algorithms can be teased out into the model parameters.
But it only works on the class of fully solved problems. Insofar as unsolved problems can be characterized as a solved system of generating and testing hypotheses to solve the unsolved, we may potentially also assail unsolved problems with this tool.
Different algorithms do different things but “generative” AI can certainly come up with new stories and images and with different algorithms AI can work with not fully solved problems like protein folding.
I believe this is a "good" bubble in the sense that the 19th century railroad bubble and original dot com bubble both ended up invested in infrastructure that created immense value.
That said, all of these LLMs are interchangeable, there are no moats, and the profit will almost entirely be in the "last mile," in local subject matter experts applying this technology to their bespoke business processes.
America's railroad boom isn't a great example. It got us the worst rail infrastructure in the world, built by private monopolies solely for maximum short-term profit, i.e. moving freight and not passengers. Now American industry is largely gone and we're stuck with rail infrastructure that is useless to almost everyone and costs far more to maintain than it's worth.
America's internet infrastructure, like the railroads, was also left in the hands of private monopolies and it is also a piece of shit compared to other countries. It's slow and everyone pays far too much for it and many are still excluded from it because it's not profitable enough to run fiber to their area.
The AI bubble won't leave behind any new infrastructure when it bursts. Just millions of burned out GPUs that get sent to an e-waste processing plant where they are ground up into sand, trillions of dollars wasted, many terawatt hours of energy wasted, many billions of liters of freshwater wasted, and the internet being buried under an avalanche of pseudorandomly-generated garbage.
How can massively buying hardware that will have to be thrown away in a few years be a "good" bubble in the sense of being a lasting infrastructure investment?
All over rural New England you'll find abandoned rail lines. Many of these were used for passenger service between walkable towns. Now, Boston area commuter rail sucks big time. Towns now have commercial strips "served" by cars-only "stroads."
How well used do you think those AI data centers are going to be?
I keep saying to people - "if you have a good idea that can make use of large amounts of really really cheap GPUs to do something genuinely useful - get ready for a massive glut of spare capacity". I still haven't thought of anything, unfortunately...
Lots of in-depth analysis, but I think the author is very clearly emotionally invested to the point that they are only drawing conclusions that justify and support their emotions. I agree that we’re in a bubble in the sense that a lot of these companies will go bankrupt, but it won’t be Google or Anthropic (unless Google makes a model that’s an order of magnitude better or an order of magnitude cheaper with capability parity). Claude is simply too good at coding in well-represented languages like Python and TypeScript to not pay hundreds of dollars a month for (if not thousands, subsidized by employers). These companies are racing to have the most effective agents and models right now. Once the bottleneck is clearly humans’ ability to specify the requirements and context, reducing the cost of the models will be the main competitive edge, and we’re not there yet (although even now, the better you are at providing requirements and context, the more effective you are with the models). I think that once cost reduction is the target, Google will win because they have the hardware capabilities to do so.
> Claude is simply too good at coding in well-represented languages like Python and Typescript to not pay hundreds of dollars a month for (if not thousands, subsidized by employers).
I think the cost is more in the thousands to cover inference. And, no, I don’t think it’s been proven out that an engineer is so much more productive as to justify thousands of dollars a month in cost. The models are great for greenfield projects. But a lot of engineering is iterating on and maintaining an existing code base, a code base that the engineer is fluent in. So the time savings is writing code specific enough to implement a new feature vs writing a prompt specific enough that the AI can write code specific enough to implement a new feature. The difference between those two tasks is the time savings.
Say that difference is like 10%. You save 10% of your time by using AI, meaning you have 4 more hours a week than you did before. Are you going to spend 4 more hours writing code? No. Some will be spent in meetings. Some will be spent reading Hacker News. Maybe you’ll get two hours a week of additional coding time. So you’re really only increasing your output by 5%.
So the employer gets 5% more from you if you have AI. If your salary is 10k per month, they wouldn’t pay more than $500 per month. And you’re probably costing Anthropic >$10k in inference costs per _week_. The economics just don’t make sense.
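The back-of-envelope arithmetic above can be sketched directly. Every input here is the comment's own illustrative figure (10% time savings, half of it lost to non-coding work, a $10k/month salary), not measured data:

```python
# Back-of-envelope model of the argument above; all inputs are
# the comment's illustrative numbers, not real data.
WEEK_HOURS = 40

time_saved = 0.10 * WEEK_HOURS            # 10% savings -> 4 hours/week
coding_hours_recovered = time_saved / 2   # assume half is lost to meetings/HN
effective_gain = coding_hours_recovered / WEEK_HOURS  # real output gain

salary_per_month = 10_000
max_monthly_willingness_to_pay = effective_gain * salary_per_month

print(time_saved)                      # 4.0 hours/week nominally saved
print(effective_gain)                  # 0.05, i.e. only 5% more output
print(max_monthly_willingness_to_pay)  # 500.0 dollars/month
```

Swap in your own figures; the structure of the argument (nominal savings, discounted to effective savings, priced against salary) stays the same.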
You can sub out the numbers here and play around with the scenario. I think the cost of inference needs to drastically fall. And I don’t think that happens soon. What might happen 10 years from now is developers are given a laptop with a built-in GPU that does much better AI-powered code auto-complete. That’s something an employer can pay 3k-5k for _once_ as a hardware investment. But the future of AI coding won’t be agents. It won’t be prompt-engineering. The models aren’t going to get much better. It will be simple and standard and useful but unimpressive. It’s going to feel boring. When it’s working, when it’s mature, when it becomes economical, it always feels boring. And that’s a good thing.
OpenAI was arguably an oom ahead at one point, and competitors caught up in about a year. So I’m not sure even an advantage like that is insurmountable. Like we saw with Anthropic, you just need a group of key researchers to leave the incumbent and start their own thing—they’ll then have a pretty good shot at catching up.
It is not clear at all that Claude will actually be profitable. Are there enough people who will actually pay the unsubsidized costs, especially if they end up being a significant fraction of an additional dev's salary?
I think the author's take is overly bleak. Yes, he supports his claim that AI businesses are currently money pits and unsustainable. But I don't think it's reasonable to claim that AI can't be profitable. This whole thing is moving so extremely fast. Models are getting better by the month. Cost is rapidly coming down. We broadly speaking still don't know how to apply AI.
I think it's hubris to claim that, in the wake of this whole bubble, no one will figure out how to use AI to provide value and no one will be profitable.
It is not about profitability alone, but whether benefits are net positive for society over the long term.
Profitability is easy with current standards. Get the users. Make them dependent. Increase the price. Make AI mandatory. List goes on.
We have pretty much plateaued in base model performance since GPT-4. It's mostly tooling and integration now. The target is also AGI, so no matter your product you will get measured on your progress towards it. With new "sota" models popping up left and right, you also have no good way of retaining users, because the user is mostly interested in the model's performance, not the funny meme generator you added. Looking at you, OpenAI...
"They called me bubble boy..." - some dude at Deutsche.
Are we in a bubble that's going to pop and take a large part of the economy with it? Almost certainly. Does it mean that the AI is a scam? Not really. After all, the Internet did not disappear after the dotcom burst, and, actually, almost everything we were promised by the dotcoms became reality at some point.
"doing everything on the internet" definitely worked out, but I don't see why that implies "GPU accelerated LLMs will replace large swathes of human labor" will also be true
Worth noting that the essay acknowledges that there are ways that people use this stuff and actually like it. Saying it's a scam is about those uses being orders of magnitude less valuable than the companies involved (and credulous media) claim, and even orders of magnitude less than the amount of money that they are actively investing in this stuff. Saying it's a bubble is not a claim that it will go away entirely and never be seen again, it's a claim that reality will eventually manifest and result in massive upheaval as companies go bankrupt, valuations plummet, and associated downstream effects.
See, the problem when making predictions is that the timeframe is effectively the prediction. I don't know what will happen. When I saw GPT-3 I thought it was hot garbage and never took it seriously. As a result I now have large error bars about what the future holds.
What we got from the Internet was some version of the original promises, on a significantly longer timescale, mostly enabled by technology that didn't exist at the time those promises were made. "Directionally correct" is a euphemism for "wrong".
> almost everything we were promised by the dotcoms became reality at some point.
remember the blockchain bubble? used much blockchain lately? are blockchains changing anything?
> Outside of OpenAI, Anthropic and Anysphere (which makes AI coding app Cursor), there are no Large Language Model companies — either building models or services on top of others' models — that make more than $500 million in annualized revenue (meaning month x 12), and outside of Midjourney ($200m ARR) and Ironclad ($150m ARR), according to The Information's Generative AI database, and Perplexity (which just announced it’s at $150m ARR), there are only twelve generative AI-powered companies making $100 million annualized (or $8.3 million a month) in revenue. Though the database doesn't have Replit (which recently announced it hit $100 million in annualized revenue), I've included it in my calculations for the sake of fairness.
I think this is among the most unhinged paragraphs I've ever read in my entire life. It deeply, metaphysically, struggles to frame what it's presenting in a bad light, but the data is so overwhelmingly positive that it just can't do it.
"Ugh, there's only twelve companies basically none of which existed two years ago making over a hundred million dollars in revenue. What a failure of an industry. And only three of them are making a half a billion? What utter failures. See, no one is using any of this stuff!!"
The bubble will pop, just like the web bubble popped; and that’s going to suck. AI technologies will remain and be genuinely transformative, just like the web remained and was transformative (for good and ill).
The nasdaq composite p/e at its peak during the dotcom bubble breached 200.
Today we're at 40, and Nvidia alone is at 49.
As much as everyone wants this to be a bubble: it isn't. ChatGPT was the fastest "thing" in history to reach 100M MAUs, and is believed to be a top 5 most visited website today, across the entire internet. Cursor was the fastest company in human history to reach $500M in revenue. Midjourney, the company no one talks about anymore, is profitable and makes over $200M in revenue.
Being brutal here: HackerNews is in the bubble. Yeah, there's some froth, there's some overvaluation, some of these companies will die. But I seriously do not understand how people can see these real, hard statistics, not fake money like VC dollars or the price of bitcoin but fucking deep and real shit, and still say "nah, it's like crypto all over again".
48% of respondents to a recent survey said they've used ChatGPT for therapy [1]. FORTY EIGHT PERCENT. There is no technology humanity has ever invented that has seen genpop uptake this quickly, and it's not dropping. This is not "oh, well, the internet will be popular soon, throw money at it, people will eventually come". This is: "we physically cannot buy enough GPUs to satisfy demand, our services keep going down every week because so many people want to pay for this".
The irony is that I asked ChatGPT to make a summary in French. However, I'm tired of the AI bubble and of seeing half of my Twitter feed filled with AI announcements and threads.
I like to categorize AI outputs by the information size of the prompt + context input vs the information size of the output.
Summaries: output < input. It’s pretty good at this for most low-to-medium stakes tasks.
Translate: output ≈ input but in different format/language. It’s decent at this, but requires more checking.
Generative expansion: output > input. This is where the danger is. Like asking for a cheeseburger and it infers a sesame seed bun because that matches its model of a cheeseburger. Generally that’s fine. Unless you’re deathly allergic to sesame seeds. Then it’s a big problem. So you have to be careful in these cases. And, at best, anything inferred beyond the input is average by definition. Hence AI slop.
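The three categories above can be sketched as a toy function. This is purely illustrative: it uses raw character counts as a crude proxy for information content (real information content would need something like compression size), and the 0.8/1.2 cutoffs are arbitrary assumptions:

```python
def categorize(prompt_and_context: str, output: str) -> str:
    """Classify an AI interaction by output size relative to input size."""
    ratio = len(output) / max(len(prompt_and_context), 1)
    if ratio < 0.8:
        return "summary"              # output < input: condensing
    if ratio <= 1.2:
        return "translation"          # output ~ input: same info, new form
    return "generative expansion"     # output > input: the model is inferring

# A long article boiled down to one line is a summary...
print(categorize("a long detailed article " * 50, "one-line abstract"))
# ...while a short prompt yielding a long answer is generative expansion,
# the case where inferred details (the sesame seed bun) need checking.
print(categorize("write me a cheeseburger recipe", "step one " * 100))
```

The useful part of the framing is the last branch: it flags exactly the interactions where the model had to fill in content the input never supplied.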
Hey, it's the guy who has been predicting the imminent collapse of AI for three years now! As I understand, he's a former video game journalist and being anti-AI is now his full-time thing. Saying it's all useless, fake, evil, etc.
What's clear is that the hype has reached such a critical mass that people are comfortable enough to publicly and shamelessly extrapolate extraordinary claims based purely on gut feeling. Both here on HN and by complete laymen elsewhere.
A while back I saw a comment here that was basically like, we don't need to worry about copyrights because they won't matter once LLMs are able to create an OS from scratch for the cost of tokens. Which is just a batshit insane not thought through stance to have.
But then I think about the real planning decisions that were made based on claims about self-driving cars and Hyperloop being available "soon", which made people materially worse off due to deferred or canceled public transportation infrastructure.
> people are comfortable enough to publicly and shamelessly extrapolate extraordinary claims based purely on gut feeling
What's the problem with that? Why shouldn't people feel comfortable sharing their vision of the future, even if it's just a "gut feeling" vision? We're not going to run out of ink.
This is quite refreshing to read. While I would classify myself more in the group of “optimists”, I do believe there is a severe lack of skepticism, and those that share negative or more conservative views are indeed held to a different standard than those who paint themselves as "optimists". Unlike other trends before, the wave of grifters in the AI space is astounding: anything can be “AI-powered” as long as it's a wrapper/chatbot.
The analysis is just bogus. He is basically comparing two years of inflated AI capex estimates to a low-ball estimate of one year of trailing revenue.
Let's unpack that a bit.
Capex is spending on capital goods, with the spending being depreciated over the expected lifetime of the good. You can't compare a year of capex to a year of revenue: a truck doesn't need to pay for itself in year 1, it needs to pay for itself over 10 or 20 years. The projected lifetime of datacenter hardware bought today is probably something like 5-7 years (changes to the depreciation schedule are often flagged in earnings releases, so that's a good source for hard data). The projected lifetime of a new datacenter building is substantially longer than that.
Somehow Zitron manages to make a comparison that's even less valid than comparing one year of capex to one year of revenue: he basically ends up comparing a year of revenue to two years of capex. So now the truck needs to pay for itself in six months.
The way you'd need to think about this is to consider, for example, what the 2025 return was on the capital goods bought in 2024. But that's not what's happening here. Instead the article is basically expecting a GPU that's to be paid for and installed in late 2025 to have produced revenue in early 2025. That's not going to happen. In a steady state this would not matter so much. But this is not a steady state: both capex and revenue are growing rapidly, and revenue will lag behind.
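To make the depreciation point concrete, here is a sketch with hypothetical round numbers. The straight-line six-year depreciation schedule and every dollar figure are assumptions for illustration, not reported data:

```python
# All figures hypothetical; the point is the framing, not the numbers.
capex_2024 = 60e9        # assumed AI capex in 2024
capex_2025 = 80e9        # assumed AI capex in 2025
revenue_2025 = 20e9      # assumed AI revenue in 2025
lifetime_years = 6       # assumed hardware depreciation schedule

# Framing 1 (invalid): one year of revenue vs two years of capex.
ratio_two_years = revenue_2025 / (capex_2024 + capex_2025)

# Framing 2 (still invalid): one year of revenue vs the same year's capex.
ratio_one_year = revenue_2025 / capex_2025

# Framing 3: revenue vs annual depreciation of hardware already in service.
# Only the 2024 purchases are fully in service throughout 2025.
annual_depreciation = capex_2024 / lifetime_years
ratio_vs_depreciation = revenue_2025 / annual_depreciation

print(round(ratio_two_years, 2))       # 0.14 -- looks catastrophic
print(round(ratio_one_year, 2))        # 0.25 -- still looks dire
print(round(ratio_vs_depreciation, 2)) # 2.0  -- revenue covers depreciation 2x
```

Same underlying numbers, three wildly different conclusions, which is exactly why the choice of framing matters more than the headline ratio.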
What about the capex being inflated and the revenue being low-balled?
None of us really know for sure how much of the capex spending is on things one might call AI. But the pre-AI capex baseline of these companies was tens of billions each. Probably some non-AI projects no longer happen so that the companies can plow more money into AI capex, but it absolutely won't be all of it like the article assumes. As another example, why in the world is Tesla being included in the capex numbers? It's just blatant and desperate padding of the numbers.
As for the revenue, this is mostly analyst estimates rather than hard data (with the exception of Microsoft, though Zitron is misrepresenting the meaning of run rate). Given what he has to say about analysts elsewhere, it seems odd to trust them here. But more importantly, they are analyst estimates of a subset of the revenue that GPUs/TPUs would produce. What happens when Amazon buys a GPU? Some of those GPUs will be used internally. Some of them will be used to provide genai API services. Some might be used to provide end-user AI products. And some of them will be rented out as GPUs. Only the middle two would be counted as AI revenue.
I don't know what the fair and comparable numbers would be, am not aware of a trustworthy public source, and won't even try to guess at them. But when we don't know what the real numbers are, the one thing we should not do is use obviously invalid ones and present them as facts.
> I am only writing with this aggressive tone because, for the best part of two years,
Zitron's entire griftluencer schtick has always been writing aggressive and often obscenity-laden diatribes. Anyway, please don't forget to subscribe for just $7/month, and remember that he just loves to write and has no motive for clickbait or stirring up some outrage.
> he basically ends up comparing a year of revenue to two years of Capex
He appears to only be doing that for the seven companies cumulatively, and in each company's case is only comparing year with year.
> Both capex and revenue are growing rapidly, and revenue will lag behind.
Even if his capex estimates are inflated, unless they're off by orders of magnitude, isn't the ratio between the two figures still alarming? What was, say, Amazon's initial capex for AWS compared to the revenue? Or in any other cases where long-term investment bore fruit?
> What happens when Amazon buys a GPU?
What else are they using GPUs for? Luna cloud gaming? Crypto mining?
I’m not accusing you of anything, just giving the feedback that this line makes your post sound like it is AI slop. This is an extremely typical phrase when you prompt any current AI with some variation of “explain this post”. Honestly, the verbosity of the rest of your post also reinforces this signal. The typo here also indicates cutting and pasting things together “Given what he has to say about . But more importantly,”
If it is not AI slop, then hopefully you can use this feedback for future writing.
SoftBank is also more cautious, and the "$500 billion" Stargate project that was hyped in the White House will just build a single data center by the end of 2025.
Best rant I have read in such a long time. Subscribed despite the fact that I am all-in on AI for coding (plus much more) and disagree completely with the author's point of view.
With current LLMs, my productivity is increased by at least 50%. This will only get better over time as efficiency is gained and hardware gets cheaper.
How are you measuring your productivity? There are studies [0][1] that indicate it's common for people to self-assess that their productivity using LLMs increased by 20-40%, when in fact it decreased based on objective, controlled measures.
Shorting markets is kind of a specialised skill. Judging from the dot-com crash, you want to wait till there's a substantial drop and people start talking about a crash; then it'll bounce back about half the way, and that's when you short.
Make a technology very affordable, get people hooked. Then, when LLMs have basically destroyed the open web, charge more for accessing and searching that wealth of human-created knowledge. Profit $$$
Ethical approach? Hell no. What do you expect from an unregulated capitalistic system?
These sound very much in tone like the criticisms of Web 1.0
AI/LLMs are an infant technology, it’s at the beginning.
It took many many years until people figured out how to use the internet for more than just copying corporate brochures into HTML.
I put it to you that the truly valuable applications of AI/LLMs are yet to be invented and will be truly surprising when they come (which they must of course otherwise we’d invent them now).
Amara's law says we tend to overestimate the value of a new technology in the short term and underestimate it in the long term. We’re in the overestimate phase right now.
So I’d say ignore the noise about AI/LLMs now - the deep innovations are coming.
> It took many many years until people figured out how to use the internet for more than just copying corporate brochures into HTML.
It was immediately clear to many people how it could be used to express themselves. It took a lot of years to figure out how to kill most of those parts and turn the remainder into a corporate hellscape that's barely more than corporate brochures.
His comments about Apple ring true to my ears. Apple is definitely lagging behind in the "AI" world, but that is really what they tend to do. They aren't the first company but they are usually the best. Historically, they wait until everyone else makes the mistakes and then introduce something better. I guess they felt like they couldn't wait for the "AI" trend to blow over; probably because Siri is just not very good.
I think that Apple will hold on to their "AI" stuff for a while longer and wait until it really dies down. Then they will introduce a much better Siri and get rid of the "summarize your email" and "re-write this sentence" bullshit.
>I have written hundreds of thousands of words with hundreds of citations, and still, to this day, there are people who claim I am somehow flawed in my analysis...
Says the PR guy who discovered AI a couple of years ago and now knows it all and that all the AI experts are wrong.
I mean it's a good rant but I don't think he gets the bigger picture.
What a nasty dismissal, "he's not a tech guy anyways, he could never understand anything surrounding AI".
Quoting the end of the article verbatim:
> And remember that you, as a regular person, can understand all of this. These people want you to believe this is black magic, that you are wrong to worry about the billions wasted or question the usefulness of these tools.
You are currently being "these people".
You don't need a huge technical baggage to understand that OpenAI still operates at a loss, and that there are at the very least some risks to consider before trying to rebuild all of society on it.
I've seen many people on HN (or maybe it was also you the other times) give this same reply again and again, "what do you know? You've not made your research, and if you made research, you don't have reliable sources, and if you have reliable sources, you're not seeing the bigger picture, and if you are seeing the bigger picture, you're not a tech guy, so what do you know?"
This essentially comes back to what the article also says, you are somehow held to crazy fucking standards if you ever say anything remotely critical, and then people will come up in HN threads and say "the human brain is basically also autocomplete, so genAI will be as good as the human brain soon™" (hey, according to your reply, shouldn't people be experts in the human brain to be able to post stuff like this?)
> Military contracts. They need to live on government money to sustain growth.
Meta makes 99% of its revenue from advertising (according to the article). Google, similarly, makes most of its money from advertising.
Tesla makes money by selling cars (there's no indication the government is going to transform their fleets to Tesla vehicles; in fact, they're openly hostile to EVs).
Apple needs to rely on US government military contracts for continued growth? What?
Amazon, the company that sells toothpaste and cloud services needs to rely on US government military contracts?
wulfstan|7 months ago
"...being entirely blunt, I am an AI skeptic. I think AI and LLM are somewhat interesting but a bit like self-driving cars 5 years ago - at the peak of a VC-driven hype cycle and heading for a spectacular deflation.
My main interest in technology is making innovation useful to people and as it stands I just can't conceive of a use of this which is beneficial beyond a marginal improvement in content consumption. What it does best is produce plausible content, but everything it produces needs careful checking for errors, mistakes and 'hallucinations' by someone with some level of expertise in a subject. If a factory produced widgets with the same defect rate as ChatGPT has when producing content, it would be closed down tomorrow. We already have a problem with large volumes of bad (and deceptive!) content on the internet, and something that automatically produces more of it sounds like a waking nightmare.
Add to that the (presumed, but reasonably certain) fact that common training datasets being used contain vast quantities of content lifted from original authors without permission, and we have systems producing well-crafted lies derived from the sweat of countless creators without recompense or attribution. Yuck!"
I'll be interested to see how long it takes for this "spectacular deflation" to come to pass, but having lived through 3 or so major technology bubbles in my working life, my antennae tell me that it's not far off now...
tim333|7 months ago
AlphaFold is having a big influence in medical research. There's more to AI than chatbots.
It's quite interesting the work they are doing there now - article on it https://www.labiotech.eu/in-depth/alpha-fold-3-drug-discover... I've got a personal interest because my sister has ALS and I think an in silico breakthrough is the only thing that could fix that before her dying.
whywhywhywhy|7 months ago
Nah you just post it, if people point out the mistakes the comment is treated as a positive engagement by the algorithm anyway, unfortunately for anyone that cares.
K0balt|7 months ago
Somehow, in AI, people lost sight of the fact that transformer architecture AI is a fundamentally extractive process for identifying and mining the semantic relationships in large data sets.
Because human cultural data contains a huge amount of inferred information not overtly apparent in the data set, many smart people confused the results with a generative rather than an extractive mechanism.
….To such a point that the entire field is known as “generative” AI, when fundamentally it is not in any way generative. It merely extracts often unseen or uncharacterized semantics, and uses them to extrapolate from a seed.
There are, however, many uses for such a mechanism. There are many, many examples of labor where there is no need to generate any new meaning or “story”.
All of this labor can be automated through the application of existing semantic patterns to the data being presented, and to do so we suddenly do not need to fully characterize or elaborate the required algorithm to achieve that goal.
We have a universal algorithm, a sonic screwdriver if you will, with which we can solve any fully solved problem set by merely presenting the problems and enough known solutions so that the hidden algorithms can be teased out into the model parameters.
But it only works on the class of fully solved problems. Insofar as unsolved problems can be characterized as a solved system of generating and testing hypothesis to solve the unsolved, we may potentially also assail unsolved problems with this tool.
tim333|7 months ago
frithsun|7 months ago
That said, all of these LLMs are interchangeable, there are no moats, and the profit will almost entirely be in the "last mile," in local subject matter experts applying this technology to their bespoke business processes.
bjornnn|7 months ago
America's internet infrastructure, like the railroads, was also left in the hands of private monopolies and it is also a piece of shit compared to other countries. It's slow and everyone pays far too much for it and many are still excluded from it because it's not profitable enough to run fiber to their area.
The AI bubble won't leave behind any new infrastructure when it bursts. Just millions of burned out GPUs that get sent to an e-waste processing plant where they are ground up into sand, trillions of dollars wasted, many terawatt hours of energy wasted, many billions of liters of freshwater wasted, and the internet being buried under an avalanche of pseudorandomly-generated garbage.
dinkblam|7 months ago
how can massively buying hardware that will have to be thrown away in a few years be a "good" bubble in the sense of being a lasting infrastructure investment?
Zigurd|7 months ago
How well used do you think those AI data centers are going to be?
miltonlost|7 months ago
wulfstan|7 months ago
0x000xca0xfe|7 months ago
hotpotat|7 months ago
UltraLutra|7 months ago
I think the cost is more like thousands per month to cover inference. And, no, I don't think it's been proven out that an engineer becomes so much more productive that a cost of thousands of dollars a month is justified. The models are great for greenfield projects. But a lot of engineering is iterating on and maintaining an existing code base, one the engineer is already fluent in. So the comparison is: writing code specific enough to implement a new feature vs. writing a prompt specific enough that the AI can write code specific enough to implement that feature. The difference between those two tasks is the time savings.
Say that difference is like 10%. You save 10% of your time by using AI, meaning you have 4 more hours a week than you did before. Are you going to spend 4 more hours writing code? No. Some will be spent in meetings. Some will be spent reading Hacker News. Maybe you’ll get two hours a week of additional coding time. So you’re really only increasing your output by 5%.
So the employer gets 5% more from you if you have AI. If your salary is $10k per month, they wouldn't pay more than $500 per month for the tool. And you're probably costing Anthropic >$10k in inference costs per _week_. The economics just don't make sense.
You can sub out the numbers here and play around with the scenario. I think the cost of inference needs to fall drastically, and I don't think that happens soon. What might happen 10 years from now is that developers are given a laptop with a built-in GPU that does much better AI code auto-complete locally. That's something an employer can pay $3k-5k for _once_ as a hardware investment. But the future of AI coding won't be agents. It won't be prompt engineering. The models aren't going to get much better. It will be simple and standard and useful but unimpressive. It's going to feel boring. When it's working, when it's mature, when it becomes economical, it always feels boring. And that's a good thing.
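To make it easy to "sub out the numbers", here is the back-of-envelope above as a tiny script (every figure is the comment's own assumption, not measured data):

```python
# Back-of-envelope from the argument above; all inputs are the
# commenter's assumptions and can be swapped out to test the scenario.
hours_per_week = 40
time_saved_fraction = 0.10            # assume AI saves 10% of your time

hours_saved = hours_per_week * time_saved_fraction   # 4 hours/week freed up

# Assume only about half of the freed time becomes extra coding output
effective_hours = hours_saved * 0.5                  # 2 hours/week
output_gain = effective_hours / hours_per_week       # 5% more output

monthly_salary = 10_000
max_tool_value = monthly_salary * output_gain        # what the employer would pay
print(max_tool_value)  # -> 500.0
```

Changing `time_saved_fraction` or the realization rate shows how sensitive the conclusion is to those two guesses.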
o_nate|7 months ago
danenania|7 months ago
cwmma|7 months ago
jakobnissen|7 months ago
cobertos|7 months ago
nicce|7 months ago
BoredPositron|7 months ago
"They called me bubble boy..." - some dude at Deutsche.
hopelite|7 months ago
ddddang|7 months ago
[deleted]
usrnm|7 months ago
Palomides|7 months ago
topaz0|7 months ago
qsort|7 months ago
What we got from the Internet was some version of the original promises, on a significantly longer timescale, mostly enabled by technology that didn't exist at the time those promises were made. "Directionally correct" is a euphemism for "wrong".
skeezyboy|7 months ago
827a|7 months ago
I think this is among the most unhinged paragraphs I've ever read in my entire life. It deeply, metaphysically, struggles to frame what it's presenting in a bad light, but the data is so overwhelmingly positive that it just can't do it.
"Ugh, there are only twelve companies, basically none of which existed two years ago, making over a hundred million dollars in revenue. What a failure of an industry. And only three of them are making half a billion? What utter failures. See, no one is using any of this stuff!!"
jjjggggggg|7 months ago
thoroughburro|7 months ago
troupo|7 months ago
(With a caveat that LLMs actually do have their uses)
827a|7 months ago
Today we're at 40, and Nvidia alone is at 49.
As much as everyone wants this to be a bubble: it isn't. ChatGPT was the fastest "thing" in history to reach 100M MAUs, and is believed to be a top 5 most visited website today, across the entire internet. Cursor was the fastest company in human history to reach $500M in revenue. Midjourney, the company no one talks about anymore, is profitable and makes over $200M in revenue.
Being brutal here: HackerNews is in the bubble. Yeah, there's some froth, there's some overvaluation, some of these companies will die. But I seriously do not understand how people can see these real, hard statistics, not fake money like VC dollars or the price of bitcoin but fucking deep and real shit and still say "nah its like crypto all over again".
48% of respondents to a recent survey said they've used ChatGPT for therapy [1]. FORTY-EIGHT PERCENT. There is no technology humanity has ever invented that has seen general-population uptake this quickly, and it's not dropping. This is not "oh, well, the internet will be popular soon, throw money at it, people will eventually come". This is: "we physically cannot buy enough GPUs to satisfy demand, our services keep going down every week because so many people want to pay for this".
[1] https://sentio.org/ai-blog/ai-survey
bibelo|7 months ago
CharlesXY|7 months ago
UltraLutra|7 months ago
I like to categorize AI outputs by comparing the size of the prompt + context input to the size of the output.
Summaries: output < input. It’s pretty good at this for most low-to-medium stakes tasks.
Translation: output ≈ input, but in a different format or language. It's decent at this, but the result requires more checking.
Generative expansion: output > input. This is where the danger is. Like asking for a cheeseburger and getting a sesame seed bun because that matches its model of a cheeseburger. Generally that's fine. Unless you're deathly allergic to sesame seeds; then it's a big problem. So you have to be careful in these cases. And, at best, anything inferred beyond the input is average by definition. Hence AI slop.
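The taxonomy above could be sketched as a toy classifier (the function name and the 25% tolerance band are hypothetical choices for illustration, not anything standard):

```python
# Toy illustration of the input-size vs output-size taxonomy above.
# The 25% tolerance band deciding "roughly equal" is an arbitrary choice.
def classify_task(input_chars: int, output_chars: int, tolerance: float = 0.25) -> str:
    if output_chars < input_chars * (1 - tolerance):
        return "summary"               # output < input: usually safest
    if output_chars <= input_chars * (1 + tolerance):
        return "translation"           # output ~ input: needs more checking
    return "generative expansion"      # output > input: highest risk of slop

print(classify_task(5000, 400))    # summary
print(classify_task(1000, 1100))   # translation
print(classify_task(50, 2000))     # generative expansion
```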
frozenseven|7 months ago
A poor man's Gary Marcus, basically.
alkyon|7 months ago
Thank you for your input!
elktown|7 months ago
AI-optimist or not, that's just shocking to me.
cwmma|7 months ago
But then I think about the real planning decisions that were made based on claims that self-driving cars and Hyperloop would be available "soon", decisions that made people materially worse off through deferred or canceled public transportation infrastructure.
falcor84|7 months ago
What's the problem with that? Why shouldn't people feel comfortable sharing their vision of the future, even if it's just a "gut feeling" vision? We're not going to run out of ink.
29ebJCyy|7 months ago
ddddang|7 months ago
[deleted]
CharlesXY|7 months ago
jsnell|7 months ago
Let's unpack that a bit.
Capex is spending on capital goods, with the spending being depreciated over the expected lifetime of the good. You can't compare a year of capex to a year of revenue: a truck doesn't need to pay for itself in year 1, it needs to pay for itself over 10 or 20 years. The projected lifetime of datacenter hardware bought today is probably something like 5-7 years (changes to the depreciation schedule are often flagged in earnings releases, so that's a good source for hard data). The projected lifetime of a new datacenter building is substantially longer than that.
Somehow Zitron manages to make a comparison that's even more invalid than comparing one year of capex to one year of revenue: he basically ends up comparing a year of revenue to two years of capex. So now the truck needs to pay for itself in six months.
The way you'd need to think about this is, for example, to consider what the return in 2025 was on the capital goods bought in 2024. But that's not what's happening here. Instead the article is basically expecting a GPU that's paid for and installed in late 2025 to have produced revenue in early 2025. That's not going to happen. In a steady state this would not matter much. But this is not a steady state: both capex and revenue are growing rapidly, and revenue will lag behind.
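The depreciation point can be made concrete with a toy calculation (all numbers are hypothetical, and straight-line depreciation is assumed):

```python
# Toy illustration (hypothetical numbers): why one year of capex
# shouldn't be compared directly against one year of revenue.
capex = 100.0          # $100B spent on datacenter hardware this year
lifetime_years = 6     # assumed straight-line depreciation schedule
annual_revenue = 30.0  # $30B of revenue attributed to that hardware

# Naive comparison: all of this year's capex vs one year of revenue
naive_ratio = annual_revenue / capex              # 0.3 -- looks disastrous

# Depreciation view: only one slice of the capex is a cost this year
annual_depreciation = capex / lifetime_years      # ~16.7 per year

# The economically meaningful comparison: revenue vs this year's cost
coverage = annual_revenue / annual_depreciation   # ~1.8x -- hardware pays off

print(f"naive: {naive_ratio:.2f}, vs depreciation: {coverage:.2f}x")
```

Same inputs, opposite conclusions, which is the invalid-comparison point being made above.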
What about the capex being inflated and the revenue being low-balled?
None of us really know for sure how much of the capex spending is on things one might call AI. But the pre-AI capex baseline of these companies was tens of billions each. Probably some non-AI projects no longer happen so that the companies can plow more money into AI capex, but it absolutely won't be all of it like the article assumes. As another example, why in the world is Tesla being included in the capex numbers? It's just blatant and desperate padding of the numbers.
As for the revenue, this is mostly analyst estimates rather than hard data (with the exception of Microsoft, though Zitron is misrepresenting the meaning of run rate). Given what he has to say about analysts elsewhere, it seems odd to trust them here. But more importantly, they are analyst estimates of a subset of the revenue that GPUs/TPUs would produce. What happens when Amazon buys a GPU? Some of those GPUs will be used internally. Some of them will be used to provide genai API services. Some might be used to provide end-user AI products. And some of them will be rented out as GPUs. Only the two middle ones would be considered AI revenue.
I don't know what the fair and comparable numbers would be, am not aware of a trustworthy public source, and won't even try to guess at them. But when we don't know what the real numbers are, the one thing we should not do is use obviously invalid ones and present them as facts.
> I am only writing with this aggressive tone because, for the best part of two years,
Zitron's entire griftluencer schtick has always been writing aggressive and often obscenity-laden diatribes. Anyway, please don't forget to subscribe for just $7/month, and remember that he just loves to write and has no motive for clickbait or stirring up some outrage.
Apocryphon|7 months ago
He appears to only be doing that for the seven companies cumulatively, and in each company's case is only comparing year with year.
> Both capex and revenue are growing rapidly, and revenue will lag behind.
Even if his capex estimates are inflated, unless they're off by magnitudes, isn't the ratio between the two figures still alarming? What was, say, Amazon's initial capex for AWS compared to the revenue? Or in any other cases where long-term investment bore fruit?
> What happens when Amazon buys a GPU?
What else are they using GPUs for? Luna cloud gaming? Crypto mining?
davidclark|7 months ago
I’m not accusing you of anything, just giving the feedback that this line makes your post sound like it is AI slop. This is an extremely typical phrase when you prompt any current AI with some variation of “explain this post”. Honestly, the verbosity of the rest of your post also reinforces this signal. The typo here also indicates cutting and pasting things together: “Given what he has to say about . But more importantly,”
If it is not AI slop, then hopefully you can use this feedback for future writing.
camillomiller|7 months ago
bgwalter|7 months ago
https://www.wsj.com/tech/ai/softbank-openai-a3dc57b4
tomjuggler|7 months ago
billy99k|7 months ago
nerevarthelame|7 months ago
[0] https://storage.googleapis.com/gweb-research2023-media/pubto...
[1] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
pestatije|7 months ago
tim333|7 months ago
louwrentius|7 months ago
xela79|7 months ago
Ethical approach? Hell no. What do you expect from an unregulated capitalist system?
zild3d|7 months ago
Competition, fortunately
andrewstuart|7 months ago
AI/LLMs are an infant technology, it’s at the beginning.
It took many, many years until people figured out how to use the internet for more than just copying corporate brochures into HTML.
I put it to you that the truly valuable applications of AI/LLMs are yet to be invented and will be truly surprising when they come (which they must of course otherwise we’d invent them now).
Amara's law says we tend to overestimate the value of a new technology in the short term and underestimate it in the long term. We're in the overestimate phase right now.
So I’d say ignore the noise about AI/LLMs now - the deep innovations are coming.
FranzFerdiNaN|7 months ago
It was immediately clear to many people how it could be used to express themselves. It took a lot of years to figure out how to kill most of those parts and turn the remainder into a corporate hellscape that's barely more than corporate brochures.
miltonlost|7 months ago
andrewstuart|7 months ago
Corrado|7 months ago
I think that Apple will hold on to their "AI" stuff for a while longer and wait until it really dies down. Then they will introduce a much better Siri and get rid of the "summarize your email" and "re-write this sentence" bullshit.
tim333|7 months ago
Says the PR guy who discovered AI a couple of years ago and now knows it all and that all the AI experts are wrong.
I mean it's a good rant but I don't think he gets the bigger picture.
ch_fr|7 months ago
Quoting the end of the article verbatim:
> And remember that you, as a regular person, can understand all of this. These people want you to believe this is black magic, that you are wrong to worry about the billions wasted or question the usefulness of these tools.
You are currently being "these people".
You don't need a huge technical baggage to understand that OpenAI still operates at a loss, and that there are at the very least some risks to consider before trying to rebuild all of society on it.
I've seen many people on HN (or maybe it was also you the other times) give this same reply again and again: "what do you know? You haven't done your research, and if you have done research, you don't have reliable sources, and if you have reliable sources, you're not seeing the bigger picture, and if you are seeing the bigger picture, you're not a tech guy, so what do you know?"
This essentially comes back to what the article also says, you are somehow held to crazy fucking standards if you ever say anything remotely critical, and then people will come up in HN threads and say "the human brain is basically also autocomplete, so genAI will be as good as the human brain soon™" (hey, according to your reply, shouldn't people be experts in the human brain to be able to post stuff like this?)
adverbly|7 months ago
jcgrillo|7 months ago
Has this effect been demonstrated by any company yet? AFAIK it has not, but I could be wrong. This seems like a rather large "what if"
micahel00|7 months ago
[deleted]
louwrentius|7 months ago
Military contracts.
I hope people understand the irony, but to spell it out: they need to live on government money to sustain growth.
Corporate welfare, while 60% of the US population doesn't have the money to cover a $1,000 emergency.
andsoitis|7 months ago
Meta makes 99% of its revenue from advertising (according to the article). Google, similarly, makes most of its money from advertising.
Tesla makes money by selling cars (there's no indication the government is going to transform their fleets to Tesla vehicles; in fact, they're openly hostile to EVs).
Apple needs to rely on US government military contracts for continued growth? What?
Amazon, the company that sells toothpaste and cloud services needs to rely on US government military contracts?
Consider me not convinced by the story you tell.