I believe Google has earned the most revenue of any business ever [1]
So if the idea is to unseat Google, and make LLMs that are monetized by ads -- well that would be a lot of revenue!
The problem is obviously that Google knows this, and they made huge investments in AI before anyone else
---
I guess someone wants to do to Google what Apple did to Microsoft in the mobile era -- take over the operating system that matters by building something new (mobile), not by directly trying to unseat Microsoft
The problem seems to be that no one has figured out what the network effect in LLMs is. Google has a few network effects, but the bidder / ad buyer network is very strong -- they can afford to pay out a bigger rev share than anybody else
Google also had very few competitors early on -- Yahoo was the most credible competitor for a long time. And employees didn't leave to start competitors. Whereas OpenAI has splintered into 5 or more companies, fairly early in its life
[1] at least according to the Acquired podcast, which is reputable
> I believe Google has earned the most revenue of any business ever [1]
By yearly revenue, the highest revenue company is Walmart, followed by Amazon, both of which make somewhere near twice the revenue of Alphabet (around 11th place, per https://en.wikipedia.org/wiki/List_of_largest_companies_by_r...). Especially if you account for inflation, the total lifetime revenues of the major oil companies will easily dwarf Google's.
Google is nowhere close to earning the most revenue of any business ever.
The problem seems more to be that every last one of these companies is burning through cash at an astonishing rate. No one, least of all Google, is making a profit from AI. They keep dangling AGI in front of investors even though no one can really define what it is.
Companies like Uber and Amazon operated at a loss, true. But they had an actual product. And they didn't come close to the money Google, Meta, OpenAI and Microsoft are losing.
> I believe Google has earned the most revenue of any business ever
As people have pointed out, this is wrong.
But anyway, Google's revenue last year was enough to satisfy the low end of the range the article points out. And barely so.
So if everything goes perfectly for the next 5 years capital-wise, and AI manages to capture Google's revenue, then under the most optimistic conditions they will be able to break even with depreciation.
Honestly, that is better than what I was expecting. But it's completely different from the picture you will see in any media.
>The problem seems to be that no one has figured out what the network effect in LLMs is.
At the very least, the exact same network effects with respect to advertising that search has. The vast majority of frequent ChatGPT users I know mostly use it like a search engine.
That said, those network effects will be massive. Ads in LLMs are going to be unprecedentedly lucrative, irrespective of the platform. Google/Meta currently charge so much for ads because they have such enormous proprietary profiles on users based on their search/communication history that they can offer advertisers the ability to target users with extraordinary granularity. But at the end of the day, the ad itself is static and obviously an ad. LLMs will make these ads dynamic and insidious, subtly injected into chats in the way a real-life conversation might happen to discuss products. LLMs will become the ultimate word-of-mouth advertisers, the final form of astroturfing.
I think the fibre optic analogy is a bad one. The key reason supply massively outstripped demand was that optical equipment massively improved in efficiency.
We are not seeing that (currently) with GPUs. Perf/watt has basically stalled out recently, while tokens per user in many use cases has gone up 100x+ (take Claude Code usage vs normal chat usage). It's very, very unlikely we will get breakthroughs in compute efficiency the way we did in the late 90s/2000s for fiber optic capacity.
Secondly, I'm not convinced the capex has increased that much. From some brief research the major tech firms (hyperscalers + meta) were spending something like $10-15bn a month in capex in 2019. Now if we assume that spend has all been rebadged AI, and adjust for inflation it's a big ramp but not quite as big as it seems, especially when you consider construction inflation has been horrendous virtually everywhere post covid.
What I really think is going on is some sort of prisoner's dilemma with capex. If you don't build then you are at serious risk of shortages, assuming demand continues even in the short and medium term. That then potentially means you start churning major non-AI workloads along with the AI work from e.g. AWS. So everyone is booking up all the capacity they can get, and let's keep in mind only a small fraction of these giant trillion-dollar numbers being thrown around, especially from OpenAI, are actually hard commitments.
To be honest if it wasn't for Claude code I would be extremely skeptical of the demand story but given I now get through millions of tokens a day, if even a small percentage of knowledge workers globally adopt similar tooling it's sort of a given we are in for a very large shortage of compute. I'm sure there will be various market corrections along the way, but I do think we are going to require a shedload more data centres.
> We are not seeing that (currently) with GPUs. Perf/watt has basically stalled out recently, while tokens per user in many use cases has gone up 100x+ (take Claude Code usage vs normal chat usage). It's very, very unlikely we will get breakthroughs in compute efficiency the way we did in the late 90s/2000s for fiber optic capacity.
At least for gaming, GPU performance per dollar has gotten a lot better in the last decade. It hasn't gotten much better in the past couple of years specifically, but I assume a lot of that is due to the increased demand for AI use driving up the price for consumers.
Yeah, but the question is whether your demand for Claude Code would be as high as it is, if Anthropic were charging enough to cover their costs. Not this fake "the model is profitable if you ignore training the next model" stuff but enough for them to actually be profitable today.
This is a crucial question that often gets overlooked in the AI hype cycle. The article makes a great point about the disconnect between infrastructure investment and actual revenue generation.
A few thoughts:
1. The comparison to previous tech bubbles is apt - we're seeing massive capex without clear paths to profitability for many use cases.
2. The "build it and they will come" mentality might work for foundational models, but the application layer needs more concrete business cases.
3. Enterprise adoption is happening, but at a much slower pace than the investment would suggest. Most companies are still in pilot phases.
4. The real value might come from productivity gains rather than direct revenue - harder to measure but potentially more impactful long-term.
What's your take on which AI applications will actually generate enough value to justify the current spending levels?
There are two main threads I keep going back to when thinking about long term AI and why so many investors/statespeople are all in:
1) the labor angle: it’s been stated plainly by many execs that the goal is to replace double-digit percentages of their workforce with AI of some sort. Human wages being what they are, the savings there are meaningful and seemingly worth the gamble.
2) the military angle: the future of warfare seems to be autonomous weapons/vehicles of all sorts. Given the winner takes all nature of warfare, any edge you can get there is worth it. If not investing enough in AI means the US gets steamrolled by China in the Pacific (and other countries getting steamrolled by whomever China wants to sell/lend its tech to), then it seems to justify most any investment, no matter how ridiculous the current returns seem.
First of all, warfare is not winner-take-all. That's a sort of naive video game conception. The famous Clausewitz line that war is politics by other means is much more accurate.
When armed conflicts happen, it's because the belligerents have specific objectives, and very rarely is that objective "the total obliteration of the enemy" vs something more specific and concrete like territory, access to natural resources, the creation of a vassal state that can be exploited, or sometimes purely ideological (nationalist notions growing into the idea that a people are entitled to an empire).
Anyhow, the point is warfare is not a winner takes all game of obliteration.
But also, the idea that the future of warfare will be all autonomous weapons is massively overweighting drone hype, and ignoring that a lot of the fundamentals haven't changed since the days of Bismarck, despite the rise of drones, computer vision algorithms, etc.
A simple example is Ukraine, where the battlefield is essentially defined by the combination of traditional artillery, mines and similar fortifications, and simple observation drones that don't have any particularly complex AI. The combination creates a 20 km "no go zone" that has nothing really to do with autonomy.
In fact, the more AI-centric loitering munitions provided by US/EU firms have performed quite poorly in Ukraine, which is why they're favoring much simpler implementations like using hobby FPV drone components, or remote piloting via GSM modems, etc.
Will these technologies play an increasing role in future conflicts? Of course. But they're not going to completely upend things, or obliterate more traditional platforms.
Heck, another example: simple hand-coded AIs have been better than humans in dogfights for decades now. And it matters exactly zero for real-world conflicts, because what fighter pilots actually do isn't a Top Gun movie.
Warfare isn’t really a winner-takes-all affair. Unless you absolutely crush your enemy, most warfare ends in a stalemate of one form or another, with the victor getting an advantage over the loser. In many cases medium-tech advantages can be countered with better logistics, willingness to trade losses, or quality of weapons.
On 1, the railroads had a stronger case for that than the AI companies do. They did allow dispersed industry to integrate, did multiply their countries' GDP by a sizeable amount, and went bankrupt anyway.
If those companies replace a low double-digit percentage of developers, and capture their entire salary, it's still not enough to reach the depreciation numbers in the article.
On 2, that could justify it... Except that we are talking about fucking LLMs. What does anybody expect LLMs to do in a war that will completely obliterate some country?
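A quick sketch of the point-1 arithmetic (every number here is an assumed round figure for illustration, not a sourced one):

```python
# Hypothetical savings if AI replaces a low double-digit share of developers.
# All three inputs are assumptions, not sourced figures.
developers_worldwide = 30e6   # rough global developer count, assumed
share_replaced = 0.15         # "low double digits"
avg_salary = 50_000           # assumed blended global average, USD/year

captured = developers_worldwide * share_replaced * avg_salary
print(f"~${captured / 1e9:.0f}B/year captured")  # ~$225B/year
```

Even fully captured, that lands in the low hundreds of billions per year, which is the scale being compared against the article's depreciation numbers.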
I think the ai angle for warfare is overhyped. Most of the autonomous drone stuff happening in Ukraine is not running on bleeding edge nodes. It's radxa sbcs with process nodes from 10 years ago.
> it’s been stated plainly by many execs that the goal is to replace double-digit percentages of their workforce with AI of some sort
Even if we grant that this is possible, have any of these execs actually thought through what happens when their competitors also replace large chunks of their workforce with AI and then begin undercutting them on price? The idea that "our prices will stay exactly the same, but our salary costs will go to zero and become pure profit instead!" is delusional even if AI can actually replace large numbers of people, which itself is quite doubtful.
> As you can imagine, when you’re the vendor, the customer and the investor in a company, there’s a strong incentive to artificially inflate the numbers by signing preferable contracts that use very large numbers, and then round-trip the capital.
The comparison to railroad infrastructure is interesting.
I think the author is wrong on this point however:
> Today’s tech just cannot do what will be required of it (AI shouldn’t be dispensing medication when it can’t even count to 7).
The failures of AI are thought provoking, and more so when considered together with other results where AI performs at near expert level on challenging benchmarks. However, perfect reasoning is hardly a requirement. Most humans are not particularly good at reasoning, and most jobs do not need it. Both humans and AI can use calculators and other tools. All that's needed is that the AI is more or less as good as a human, while requiring much less pay.
A good exercise to appreciate the current state of AI might be to ask AI to write an essay about this topic ("how much revenue is needed to justify current AI spend, and draw parallels to the dotcom boom and building the transcontinental railroad"). Try it with two different models, using the deep research mode. I expect the results would be humbling.
...
So, in summary: We likely need on the order of hundreds of billions to low-trillions of dollars annually in AI revenue to justify the present level of infrastructure and model investment. Current realized revenues are many orders of magnitude below that.
But that’s the cold math. History suggests that such math often overlooks strategic externalities, spillover effects, hype, and speculative capital flows.
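As a sketch of that cold math (a minimal steady-state view with assumed inputs, not the article's exact model):

```python
# Steady-state revenue needed just to cover depreciation on AI capex.
# All inputs are assumptions: sustained $400B/yr spend, 5-year hardware
# life, 50% gross margin on AI revenue.
annual_capex = 400e9
dep_years = 5        # with dep_years overlapping hardware vintages,
gross_margin = 0.5   # steady-state annual depreciation == annual capex

annual_depreciation = annual_capex
revenue_to_cover_dep = annual_depreciation / gross_margin
print(f"${revenue_to_cover_dep / 1e9:.0f}B/year just to cover depreciation")
```

That is $800B/year before any opex, training, or return on capital, which is how one lands in the "hundreds of billions to low trillions" range.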
> This is one of those rather surreal situations where everyone senior in this ecosystem knows that the math doesn’t work, but they don’t know that everyone else also knows this. They thought that they were the foolish ones, who simply didn’t get it.
I don’t know if it’s that surreal or unexpected. There’s a reason “The Emperor’s New Clothes” is such a classic, enduring fable. It’s happened before. It’ll happen again.
Not shading the article. All good points; I was just surprised the author threw this bit in.
Railroads and fibre are better examples. Tulips are actually fucking useless as a productive asset. Railroads, fibre-optic cables, power production and datacentres are not.
In the current system, as long as you don't commit the kind of straight-up criminal fraud you'd actually get charged for, you get to keep all the money you made along the way. So even if the math never makes sense, there is money to be earned for the time being. And then when it fails, well, there is always the next scheme. And the next round of people who believe they can extract their share on the way.
“Railway mania” in the UK in the 1840s and 1860s “involved capital investments of 15% to 20% of GDP”. US GDP is around $30 trillion. [0]
Between 1900 and 1929 the total capital invested in rail stocks and bonds in the US grew from $10.8bn (against GDP of $21.2bn) to $21.4bn [1] (1929 GDP was $104.6 billion). [2]
So the current AI capex doesn’t really seem too far fetched by comparison.
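A rough version of that comparison, using the figures quoted above (all nominal, illustrative):

```python
# Capital intensity: railway eras vs current AI capex, as shares of GDP.
us_gdp_2025 = 30e12      # ~$30T
ai_capex_2025 = 400e9    # ~$400B/yr, the article's estimate

print(f"AI capex today: {ai_capex_2025 / us_gdp_2025:.1%} of US GDP")  # ~1.3%

# UK railway mania (1840s/60s): 15-20% of GDP in capital investment.
# US rail, 1900: $10.8B of capital against $21.2B GDP.
rail_share_1900 = 10.8e9 / 21.2e9
print(f"US rail capital, 1900: {rail_share_1900:.0%} of GDP")  # ~51%
```

Stock measures (cumulative rail capital) and flow measures (one year of AI capex) aren't directly comparable, but the gap in orders of magnitude is the point.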
Apple’s revenue went from $13.9bn in 2005 to $391bn in 2024.
Google’s revenue went from $6.1bn to $348bn over the same period.
Microsoft’s revenue was $197.5m in 1986, $345.9m in 1987, $591m in 1988, $804.5m in 1989, $1183m in 1990, $1843m in 1991, $14.5bn by 1998 and $19.75bn in 1999.
So revenue 10x’d from ‘86 to ‘91 and again from ‘91 to ‘99.
Those are all nominal unadjusted figures, but all three companies are arguably “category defining”.
For OpenAI to go from public launch to what looks like ~$12 billion projected annualised revenue so quickly says quite a lot. If it only follows the Microsoft trajectory that would be $120bn by 2030.
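The implied growth rates behind those "10x in roughly five years" runs can be made explicit (the OpenAI endpoint is the hypothetical above, not a forecast):

```python
# Compound annual growth rate implied by the revenue trajectories above.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# Microsoft: $197.5M (1986) -> $1,843M (1991)
print(f"MSFT 1986-1991: {cagr(197.5, 1843, 5):.0%}/yr")  # ~56%/yr
# Hypothetical OpenAI path: $12B (2025) -> $120B (2030)
print(f"10x in 5 years:  {cagr(12, 120, 5):.0%}/yr")     # ~58%/yr
```

So "following the Microsoft trajectory" means sustaining roughly 55-60% annual growth for five years, from a much larger base.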
But going back to the railways point: the impact of railways wasn’t measured in the revenue of just one company, but rather across the whole physical supply chain.
If you assume that AI could be as transformative to the digital supply chain (and everything it touches) then you could argue that investments of 20% of global GDP wouldn’t be crazy.
> We are entering an era where computing capital, intellectual capital, and military capital will dominate
These are bullshit terms. Capital is capital. Military production, IP production and yes, AIs running in datacentres and on the grid, are all subject to economic forces. (Folks argued railroads were a different form of capital in the 19th century, too. And fibre optics. And tulips. And dot-com companies. And computer-assembled American mortgage instruments.)
We might be investing for a golden future. We might be the Soviet Union baited into unsustainable spending commitments. The answer to these questions isn't in pretending this time is different, or that economics can be suspended when it comes to certain questions of production and return.
On one of the latest Odd Lots episodes, an analyst finally had an investment thesis that made sense to me:
They think they are building an AI god.
If you think of it in religious terms it suddenly makes sense. Expected rate of return? One scenario has infinite expected return (some kind of Pascal's wager/mugging)!
Of course there will be no AGI. Just a planet we'll have to live on where those deluded idiots wasted our resources on some boondoggle. Maybe this kind of concentration of power is a bad thing? I think we are going to get to those kinds of questions once the party is over.
One of the things I’d love to know is: if you agree with this view, what is an average person to do? You can always short the whole market, but that’s crazy risky, and as the article says this could go on for some time.
Gold miners like THXPF, MNSAF, KGC are fantastically priced, particularly with high inflation for the next few years. As a cyclical play, one would exit in 3-7 years (when everyone starts talking about it). Chinese stocks e.g. through KWEB are also lovely. But it's hard to suggest without first suggesting various accounting books.
1) a 24/7-available, super-competent personal assistant has tremendous value for any person who does something interesting or difficult and values personal time
2) LLM-provided tax/legal advice alone was worth hundreds of euros to me
3) there is no easy way to capture all this value. I still think edge intelligence/autonomy has a way to pay for the rest of the party, because those will be physical things. People eagerly part with money for things. Governments and enterprises will maybe pay for cloud services, if the price is right
"the industry is spending over $30 billion a month (approximately $400 billion for 2025) and only receiving a bit more than a billion a month back in revenue."
That's called a "bubble". Obviously, this time it is different until it isn't.
I own several books of trig and other tables, three slide rules and a couple of calculators, a working Commodore 64 and an IT consultancy company.
We are fiddling with LLMs as yet another tool. We are getting some great results but not earth shattering.
Tulips are very pretty flowers. I have several dozen in my garden. I have some plants that are way more valuable than tulips in my garden too.
Oh dear. I didn't even mention my crocosmia stash by name and I pissed someone off. They are jolly expensive to buy and I have a good 30' by 5' bed stuffed full of them. A crocosmia plant in a pot costs about £5-20. I've got loads of them. You should see my acer (maple) collection. Mind blown.
Oh, AI.
It is artificial but it is not intelligent. An LLM (et al) is a marvelous thing. I find sheer joy in conversing with a "gpt-oss-20b F16" that runs on a £600 GPU and a slack handful of CPU and RAM, because so little gives so much.
It's an interesting thought experiment, but not sure it's the entire story.
Imagine at the start of the electrification era people went "We'd need to build loads of cables and power plants and stuff, that's expensive, let's just stick to steam power".
It's not a bet on this making sense via pedestrian business economics but rather that it'll be a game changer.
...whether that pans out is a technological and societal question, not an economic one in my mind
> Imagine at the start of the electrification era people went "We'd need to build loads of cables and power plants and stuff, that's expensive, let's just stick to steam power"
False dichotomy. There are literally infinite options between ignoring AI and spending a quarter of a trillion on it annually.
The people investing in AI companies (and the big players spending in AI) are seeking Artificial General Intelligence (AGI). It's the only way they get a return on their capital.
They are investing so they can get there first. Money basically becomes meaningless at that point, whoever owns the AGI owns the world. That's the only way to get a return on that investment.
Or the AGI owns its owners and the rest of the world; getting it to respect its owner's wishes remains an unsolved problem which many people still seem to think isn't worth even spending time figuring out at this point.
> people investing in AI companies (and the big players spending in AI) are seeking Artificial General Intelligence (AGI). It's the only way they get a return on their capital
> the industry is spending over $30 billion a month (approximately $400 billion for 2025) and only receiving a bit more than a billion a month back in revenue.
I suspect that this revenue number is a vast underestimation, even today, ignoring the reality of untapped revenue streams like ChatGPT's 800M advertising eyeballs.
1. Google has stated that Gemini is processing 1.3 quadrillion tokens per month. It's hard to convert this into raw revenue; it's spread across different models, and much of it is likely internal usage, or usage tied to a Workspace subscription rather than per-token API billing. But to give a sense of the scale, here is what that annualized revenue looks like priced at per-token API rates for their different models, assuming a 50/50 input/output split: Gemini 2.5 Flash Lite: ~$9B/year, Gemini 2.5 Flash: ~$22.8B/year, Gemini 2.5 Pro: ~$110B/year.
2. ChatGPT has 800M weekly active users. If 10% of these users are on the paid plan, this is $19.2B/year. Adjust this value depending on what percentage of users you believe pay for ChatGPT. Sam has announced that they're processing 6B API tokens per minute, which, again depending on the model, puts their annualized API revenue between $1B-$31B.
3. Anthropic has directly stated that their annualized revenue, as of August, was $5B [2]. Given their growth, and the success of Claude 4.5, it's likely this number is now more like $6B-$7B.
So, just with these three companies, which are the three biggest involved in infrastructure rollouts, we're likely somewhere in the realm of ~$30B/year? Very fuzzy and hard, but at the very least I think it's weird to guess that the number is closer to something like $12B. It's possible the article is basing its estimates on numbers from earlier in 2025, but to be frank: if you're not refreshing your knowledge on this stuff every week, you're out of date. It's moving so fast.
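The arithmetic above can be sketched end to end. Every price and conversion rate here is an assumption chosen to reproduce the quoted figures, not a published number:

```python
# Back-of-the-envelope checks on the revenue estimates above.

def annualized_token_revenue(tokens_per_month, price_in, price_out, input_share=0.5):
    """Revenue if every token were billed at per-million-token API rates."""
    millions = tokens_per_month * 12 / 1e6
    return millions * (input_share * price_in + (1 - input_share) * price_out)

# Gemini: 1.3 quadrillion tokens/month at an assumed Flash-tier price of
# $0.30 in / $2.50 out per million tokens
gemini_flash = annualized_token_revenue(1.3e15, 0.30, 2.50)
print(f"Gemini at Flash-tier pricing: ${gemini_flash / 1e9:.1f}B/year")  # ~$21.8B

# ChatGPT subscriptions: 800M weekly actives, assumed 10% on the $20/mo plan
chatgpt_subs = 800e6 * 0.10 * 20 * 12
print(f"ChatGPT subscriptions: ${chatgpt_subs / 1e9:.1f}B/year")  # $19.2B

# OpenAI API: 6B tokens/minute; a blended price of roughly $0.32-$10 per
# million tokens spans the $1B-$31B range quoted above
api_tokens_per_year = 6e9 * 60 * 24 * 365
for blended in (0.32, 10.0):
    rev = api_tokens_per_year / 1e6 * blended
    print(f"OpenAI API at ${blended}/M blended: ${rev / 1e9:.1f}B/year")
```

The estimates are only as good as the assumed blended prices and paid-conversion rate, which is exactly why the ranges are so wide.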
> 2. ChatGPT has 800M weekly active users. If 10% of these users are on the paid plan, this is $19.2B/year. Adjust this value depending on what percentage of users you believe pay for ChatGPT. Sam has announced that they're processing 6B API tokens per minute, which, again depending on the model, puts their annualized API revenue between $1B-$31B.
OpenAI announced a few months ago that it had finally cracked $1B in monthly revenue (intriguingly, it did so twice, which makes me wonder how much fibbing there is in these statements).
I'll also say this: the fact that AI companies prefer to tout their usage numbers rather than their revenue numbers is a sign that their revenue numbers aren't stellar (especially given that several of the Big Tech companies have stopped reporting AI revenue as separate call-outs).
I think you're underestimating how quickly users can move to another platform if something better / cheaper shows up unless there are user network effects that benefit / keep people on a platform. We've lived through several of these - yahoo/lycos to Google. A bunch of terrible providers to GMail, various messengers to Apple/WhatsApp/line dominating countries etc. This space seems ripe for the second mover advantage effect
> even today, ignoring the reality of untapped revenue streams like ChatGPT's 800M advertising eyeballs.
Respectfully, the idea of sticking ads in LLMs is just copium. It's never going to work.
LLMs' unfixable inclination for hallucinations makes this an infinite lawsuit machine. Either the regulators will tear OpenAI to shreds over it, or the advertisers seeing their trademarks hijacked by scammers will do it in their stead. LLMs just cannot be controlled enough for this idea to make sense, even with RAG.
And if we step away from the idea of putting ads in the LLM response, we're left with "stick a banner ad on chatgpt dot com". The exact same scheme as the Dotcom Bubble. Worked real well that time, I hear. "Stick a banner ad on it" was a shit idea in 2000. It's not going to bail out AI in 2025.
The original content that LLMs paraphrase is itself struggling to support itself on ads. The idea that you can steal all those impressions through a service that is orders and orders of magnitude more expensive and somehow turn a profit on those very same ads is ludicrous.
I don't get why ads are never mentioned in the article. The current use cases of GenAI (chatbots, generate [art] for me,...) have extremely obvious monetization angles through ads, and then there's a positive chance that they can bring in revenue through more ways than that (they already do in the form of subscriptions, eg). It might be that the economics still don't work out, but at least it should be considered?
chubot|4 months ago
edit: oops, it was profit, not revenue
https://www.acquired.fm/episodes/google
Google with this business model makes more profits than any other company, ergo tautologically, is the most magical business model ever discovered.
bitmasher9|4 months ago
Google is very large, and I’m sure Acquired framed the statement in such a way that it’s true, but the statement as you presented it is false.
Other publicly traded companies have reported more lifetime revenue. Other product categories besides internet search have generated more revenue.
Wowfunhappy|4 months ago
Why wouldn't Moore's Law continue?
jasonwatkinspdx|4 months ago
First of all, warfare is not winner-take-all. That's a naive, video-game conception. The famous Clausewitz line, "war is the continuation of politics by other means," is much more accurate.
When armed conflicts happen, it's because the belligerents have specific objectives, and very rarely is that objective "the total obliteration of the enemy" vs something more specific and concrete like territory, access to natural resources, the creation of a vassal state that can be exploited, or sometimes purely ideological (nationalist notions growing into the idea that a people are entitled to an empire).
Anyhow, the point is warfare is not a winner takes all game of obliteration.
But also, the idea that the future of warfare will be all autonomous weapons massively overweights drone hype, and ignores that a lot of the fundamentals haven't changed since the days of Bismarck, despite the rise of drones, computer vision algorithms, etc.
A simple example is Ukraine, where the battlefield is essentially defined by the combination of traditional artillery, mines and similar fortifications, and simple observation drones that don't have any particularly complex AI. The combination creates a 20 km "no go zone" that has nothing really to do with autonomy.
In fact, the more AI centric loitering munitions provided by US/EU firms have performed quite poorly in Ukraine, which is why they're favoring much more simple implementations like using hobby FPV drone components, or remote piloting via GSM modems, etc.
Will these technologies play an increasing role in future conflicts? Of course. But they're not going to completely upend things, or obviate more traditional platforms.
Heck, another example would be simple hand coded AIs have been better than humans in dogfights for decades now. And it matters exactly zero for real world conflicts, because what fighter pilots actually do isn't a Top Gun movie.
nothercastle|4 months ago
marcosdumay|4 months ago
If those companies replace a low double-digit percentage of their developers, and capture their entire salaries, it's still not enough to reach the depreciation numbers in the article.
On 2, that could justify it... except that we are talking about fucking LLMs. What does anybody expect LLMs to do in a war that will completely obliterate some country?
aswanson|4 months ago
Analemma_|4 months ago
Even if we grant that this is possible, have any of these execs actually thought through what happens when their competitors also replace large chunks of their workforce with AI and then begin undercutting them on price? The idea that "our prices will stay exactly the same, but our salary costs will go to zero and become pure profit instead!" is delusional even if AI can actually replace large numbers of people, which itself is quite doubtful.
dgfitz|4 months ago
That about sums it up.
thinkzilla|4 months ago
I think the author is wrong on this point, however:
> Today’s tech just cannot do what will be required of it (AI shouldn’t be dispensing medication when it can’t even count to 7).
The failures of AI are thought provoking, and more so when considered together with other results where AI performs at near expert level on challenging benchmarks. However, perfect reasoning is hardly a requirement. Most humans are not particularly good at reasoning, and most jobs do not need it. Both humans and AI can use calculators and other tools. All that's needed is that the AI is more or less as good as a human, while requiring much less pay.
A good exercise to appreciate the current state of AI might be to ask AI to write an essay about this topic ("how much revenue is needed to justify current AI spend, and draw parallels to the dotcom boom and building the transcontinental railroad"). Try it with two different models, using the deep research mode. I expect the results would be humbling.
aworks|4 months ago
... So, in summary: We likely need on the order of hundreds of billions to low-trillions of dollars annually in AI revenue to justify the present level of infrastructure and model investment. Current realized revenues are many orders of magnitude below that.
But that’s the cold math. History suggests that such math often overlooks strategic externalities, spillover effects, hype, and speculative capital flows.
travisgriggs|4 months ago
I don’t know if it’s that surreal or unexpected. There’s a reason “The Emperor’s New Clothes” is such a classic, enduring fable. It’s happened before. It’ll happen again.
Not shading the article. All good points, just was surprised the author threw this bit in.
Buy more tulips.
JumpCrisscross|4 months ago
Railroads and fibre are better examples. Tulips are actually fucking useless as a productive asset. Railroads, fibre-optic cables, power production and datacentres are not.
Ekaros|4 months ago
edoceo|4 months ago
saaaaaam|4 months ago
Between 1900 and 1929 the total capital invested in rail stocks and bonds in the US grew from $10.8bn (against GDP of $21.2bn) to $21.4bn [1] (1929 GDP was $104.6 billion). [2]
So the current AI capex doesn’t really seem too far fetched by comparison.
Apple’s revenue went from $13.9bn in 2005 to $391bn in 2024.
Google’s revenue went from $6.1bn to $348bn over the same period.
Microsoft’s revenue was $197.5m in 1986, $345.9m in 1987, $591m in 1988, $804.5m in 1989, $1183m in 1990, $1843m in 1991, $14.5bn by 1998 and $19.75bn in 1999.
So revenue 10x’d from ‘86 to ‘91 and again from ‘91 to ‘99.
Those are all nominal unadjusted figures, but all three companies are arguably “category defining”.
For OpenAI to go from public launch to what looks like ~$12 billion projected annualised revenue so quickly says quite a lot. If it only follows the Microsoft trajectory that would be $120bn by 2030.
But going back to the railways point: the impact of railways wasn’t measured in the revenue of just one company, but rather across the whole physical supply chain.
If you assume that AI could be as transformative to the digital supply chain (and everything it touches) then you could argue that investments of 20% of global GDP wouldn’t be crazy.
(Though, railway mania, 1929 crash, etc…)
[0] https://www-users.cse.umn.edu/~odlyzko/doc/mania18.pdf
[1] https://www.researchgate.net/figure/Composition-of-capital-r...
[2] https://www.measuringworth.com/datasets/usgdp/result.php?use...
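The "Microsoft trajectory" extrapolation above is just compounding. A minimal sketch, where the ~$12B starting figure and the 10x-per-5-years multiple are the comment's own assumptions rather than anything official:

```python
# Compound a revenue figure by a growth multiple every fixed number of years.
def project(revenue_now, multiple, years_per_multiple, horizon_years):
    periods = horizon_years / years_per_multiple
    return revenue_now * multiple ** periods

# OpenAI at ~$12B annualized today; one Microsoft-style 10x over ~5 years
print(project(12e9, 10, 5, 5) / 1e9)  # 120.0, i.e. ~$120B by 2030
```

Changing the assumed multiple or horizon moves the output wildly, which is rather the point of comparing against category-defining companies.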
aeon_ai|4 months ago
We are in geopolitically fraught times. Money alone is not capital.
We have been living in an era where financial capital has dominated.
We are entering an era where computing capital, intellectual capital, and military capital will dominate.
The people in control of those when the game changes are the ones writing the rules.
JumpCrisscross|4 months ago
These are bullshit terms. Capital is capital. Military production, IP production and yes, AIs running in datacentres and on the grid, are all subject to economic forces. (Folks argued railroads were a different form of capital in the 19th century, too. And fibre optics. And tulips. And dot-com companies. And computer-assembled American mortgage instruments.)
We might be investing for a golden future. We might be the Soviet Union baited into unsustainable spending commitments. The answer to these questions isn't in pretending this time is different, or that economics can be suspended when it comes to certain questions of production and return.
nbngeorcjhe|4 months ago
yes, true, but it would probably be better to pour money into shipyards than data centers
tobias3|4 months ago
They think they are building an AI god.
If you think of it in religious terms it suddenly makes sense. Expected rate of return? One scenario has infinite expected return (some kind of Pascal's wager/mugging)!
Of course there will be no AGI. Just a planet we'll have to live on, where those deluded idiots wasted our resources on some boondoggle. Maybe this kind of concentration of power is a bad thing? I think we are going to get to those kinds of questions once the party is over.
tim333|4 months ago
ChrisArchitect|4 months ago
AI data centers have an impossibly short runway to achieve profitability
https://news.ycombinator.com/item?id=45546305
roxolotl|4 months ago
veqq|4 months ago
hamilyon2|4 months ago
2) llm-provided tax/legal advice alone was worth hundreds of euros for me
3) there is no easy way to capture all this value. I still think edge intelligence/autonomy has a way to pay for the rest of the party, because those will be physical things. People eagerly part with money for things. Governments and enterprises will maybe pay for cloud services, if the price is right.
gerdesj|4 months ago
That's called a "bubble". Obviously, this time it is different until it isn't.
I own several books of trig and other tables, three slide rules and a couple of calculators, a working Commodore 64 and an IT consultancy company.
We are fiddling with LLMs as yet another tool. We are getting some great results but not earth shattering.
Tulips are very pretty flowers. I have several dozen in my garden. I have some plants that are way more valuable than tulips in my garden too.
gerdesj|4 months ago
Oh, AI.
It is artificial but it is not intelligent. An LLM (inter alia) is a marvelous thing. I find sheer joy in conversing with a "gpt-oss-20b F16" that runs on a £600 GPU and a slack handful of CPU and RAM, because so little gives so much.
christophilus|4 months ago
cadamsdotcom|4 months ago
There might be a slump.
But today's AI companies (both those you've heard of and those you haven't) are here to stay.
It'll be okay, don't panic.
nickreese|4 months ago
tim333|4 months ago
Phase 1) Spend $400 bn / yr less a bit of income
Phase 2) ? - AGI around 2030 - replace workforce or double output. Worth >1x world GDP, or ~$100tn?
Phase 3) Profit!
(some discussion in "The case for AGI by 2030" https://80000hours.org/agi/guide/when-will-agi-arrive/)
Fund phase 1 by hyping it to investors
Havoc|4 months ago
Imagine if at the start of the electrification era people had said, "We'd need to build loads of cables and power plants and stuff, that's expensive, let's just stick to steam power."
It's not a bet on this making sense via pedestrian business economics but rather that it'll be a game changer.
...whether that pans out is a technological and societal question, not an economic one in my mind
JumpCrisscross|4 months ago
False dichotomy. There are literally infinite options between ignoring AI and spending a quarter of a trillion on it annually.
jedberg|4 months ago
The people investing in AI companies (and the big players spending in AI) are seeking Artificial General Intelligence (AGI). It's the only way they get a return on their capital.
They are investing so they can get there first. Money basically becomes meaningless at that point, whoever owns the AGI owns the world. That's the only way to get a return on that investment.
mitthrowaway2|4 months ago
JumpCrisscross|4 months ago
Then why fuck around with ads?
827a|4 months ago
I suspect that this revenue number is a vast underestimation, even today, ignoring the reality of untapped revenue streams like ChatGPT's 800M advertising eyeballs.
1. Google has stated that Gemini is processing 1.3 quadrillion tokens per month [1]. It's hard to convert this into raw revenue; it's spread across different models, much of it is likely internal usage, or usage tied to a Workspace subscription rather than per-token API billing. But to give a sense of the scale, this is what that annualized revenue looks like priced at per-token API pricing for their different models, assuming a 50/50 input/output split: Gemini 2.5 Flash Lite: ~$9B/year, Gemini 2.5 Flash: ~$22.8B/year, Gemini 2.5 Pro: ~$110B/year.
2. ChatGPT has 800M weekly active users. If 10% of these users are on the paid plan, this is $19.2B/year. Adjust this value depending on what percentage of users you believe pay for ChatGPT. Sam has announced that they're processing 6B API tokens per minute, which, again depending on the model, puts their annualized API revenue between $1B-$31B.
3. Anthropic has directly stated that their annualized revenue, as of August, was $5B [2]. Given their growth, and the success of Claude 4.5, it's likely this number is now more like $6B-$7B.
So, just with these three companies, which are the three biggest involved in infrastructure rollouts, we're likely somewhere in the realm of ~$30B/year? Very fuzzy and hard, but at the very least I think it's weird to guess that the number is closer to $12B. It's possible the article is basing its estimates on numbers from earlier in 2025, but to be frank: if you're not refreshing your knowledge on this stuff every week, you're out of date. It's moving that fast.
[1] https://www.reddit.com/r/Bard/comments/1o3ex1v/gemini_is_pro...
[2] https://www.anthropic.com/news/anthropic-raises-series-f-at-...
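The arithmetic behind these estimates is easy to redo under your own assumptions. A rough sketch; the per-million-token prices and the 10% paid-conversion rate below are illustrative guesses, not official rates, so the outputs shift with whatever you plug in:

```python
# Back-of-envelope annualized revenue from token throughput.
# All price points below are illustrative assumptions, not official rates.

TOKENS_PER_MONTH = 1.3e15  # Google's stated 1.3 quadrillion tokens/month

def annualized_revenue(tokens_per_month, price_in, price_out):
    """Annualized revenue, assuming a 50/50 input/output token split.
    Prices are in dollars per million tokens."""
    blended = (price_in + price_out) / 2
    return tokens_per_month * 12 * blended / 1e6

# Hypothetical (input, output) price tiers, $/M tokens
for name, p_in, p_out in [("cheap tier", 0.10, 0.40),
                          ("mid tier",   0.30, 2.50),
                          ("top tier",   1.25, 10.00)]:
    rev = annualized_revenue(TOKENS_PER_MONTH, p_in, p_out)
    print(f"{name}: ${rev / 1e9:.1f}B/yr")

# ChatGPT subscriptions: 800M weekly actives, assumed 10% paying $20/month
subscriptions = 800e6 * 0.10 * 20 * 12
print(f"subscriptions: ${subscriptions / 1e9:.1f}B/yr")  # ~$19.2B/yr
```

Note the estimates are dominated by which pricing tier you assume the traffic lands on; the spread between the cheap and top tiers is more than an order of magnitude, which is why the thread's figures disagree so widely.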
jcranmer|4 months ago
OpenAI announced a few months ago that it had finally cracked $1B in monthly revenue (intriguingly, it did so twice, which makes me wonder how much fibbing there is in these statements).
I'll also say this: the fact that AI companies prefer to tout their usage numbers rather than their revenue numbers is a sign that their revenue numbers aren't stellar (especially given that several of the Big Tech companies have stopped reporting AI revenue as separate call-outs).
sfifs|4 months ago
iLoveOncall|4 months ago
I wouldn't believe it if you told me even 1% of those users are paying. 10% is simply ridiculous.
joshvm|4 months ago
N70Phone|4 months ago
Respectfully, the idea of sticking ads in LLMs is just copium. It's never going to work.
LLMs' unfixable inclination for hallucinations makes this an infinite lawsuit machine. Either the regulators will tear OpenAI to shreds over it, or the advertisers seeing their trademarks hijacked by scammers will do it in their stead. LLMs just cannot be controlled enough for this idea to make sense, even with RAG.
And if we step away from the idea of putting ads in the LLM response, we're left with "stick a banner ad on chatgpt dot com". The exact same scheme as the Dotcom Bubble. Worked real well that time, I hear. "Stick a banner ad on it" was a shit idea in 2000. It's not going to bail out AI in 2025.
The original content that LLMs paraphrase is itself struggling to support itself on ads. The idea that you can steal all those impressions through a service that is orders and orders of magnitude more expensive and somehow turn a profit on those very same ads is ludicrous.
t_mann|4 months ago
DonsDiscountGas|4 months ago
Also, ads only make sense for the use case of a user chatting directly; for any type of automation (the big promise of AI), ads don't matter.