
America's $1T AI Gamble

58 points | m-hodges | 19 days ago | apricitas.io

82 comments


mg | 19 days ago

My napkin-math approach to get a bird's eye perspective on the situation:

A $1T investment needs to produce on the order of $100B in yearly earnings to be a good investment.

Global GDP is about $100T.

So one way for things to work out for the AI companies would be if AI raises GDP by 1% and the AI companies capture 10% of the created value.
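The napkin math above can be sketched directly (all figures are the comment's round numbers, not real data):

```python
# Napkin math: does a $1T AI investment pencil out?
investment = 1e12          # $1T total investment
required_return = 0.10     # ~10% yearly return to count as a "good" investment
global_gdp = 100e12        # ~$100T global GDP

required_earnings = investment * required_return   # $100B/yr needed
gdp_lift = 0.01                                    # suppose AI raises GDP by 1%
value_created = global_gdp * gdp_lift              # $1T/yr of new value
capture_rate = required_earnings / value_created   # share AI firms must capture

print(f"required earnings: ${required_earnings / 1e9:.0f}B/yr")
print(f"value created at 1% GDP lift: ${value_created / 1e12:.1f}T/yr")
print(f"needed capture rate: {capture_rate:.0%}")
```

With these round numbers, the required capture rate comes out to exactly the 10% the comment cites.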

louiereederson | 19 days ago

At some point AI may deliver the level of net economic benefit you reference, but it's not entirely clear that we're there yet.

Right now much of the direct monetization occurs via OpenAI and Anthropic, who together have around $30B in annualized revenue. They are burning cash like crazy, though admittedly have potentially sustainable unit economics (gross margins around 40-60% before revenue share).

However, they need to spend a huge chunk of revenue on training. OpenAI spent something like $9B on training against around $13-14B in revenue in 2025 (different from annualized revenue), according to The Information. Anthropic's mix is supposedly similar. That also implies a lot (maybe the majority) of their compute spend goes to training.

If scaling laws falter, what happens to training spending? What happens to the degree of competitive differentiation, given that Chinese open-source models are only a few months behind the frontier? Then what happens to margins? It is very fragile.

nradov | 19 days ago

That reminds me of the "Chinese marketing" strategy of a lot of Western companies 30 years ago when their economy first opened up. There are a billion people in China, so if we can capture just 1% market share there then we'll make a fortune, right? Spoiler alert: it (mostly) didn't work.

sottol | 19 days ago

If I'm not mistaken, the article states that the investment is $1T annualized when taking software development costs into account [1], assuming the labs don't all suddenly decide to stop development.

That would mean earnings of ~$1.1T would be required annually on that investment (the $1T spend plus a ~10% return), so maybe $2T of revenue, capturing 2% of global GDP. So I'd estimate that GDP would need to go up more like 5-10% to justify this.

[1] https://substackcdn.com/image/fetch/$s_!Gf2t!,f_auto,q_auto:...
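The parent comment's variant of the arithmetic, treating the $1T as annual spend rather than a one-off (the ~55% margin used to get from earnings to revenue is an assumption implied by the comment's $1.1T-to-$2T step):

```python
# Variant: treat the $1T as an *annual* spend rather than a one-off investment.
annual_spend = 1.0e12      # $1T/yr annualized (incl. software development costs)
required_return = 0.10     # same ~10% hurdle rate as the thread's napkin math
global_gdp = 100e12        # ~$100T global GDP

required_earnings = annual_spend * (1 + required_return)  # ~$1.1T/yr
assumed_margin = 0.55                                     # hypothetical net margin
required_revenue = required_earnings / assumed_margin     # ~$2T/yr
gdp_share = required_revenue / global_gdp                 # ~2% of global GDP

print(f"required earnings: ${required_earnings / 1e12:.1f}T/yr")
print(f"required revenue:  ${required_revenue / 1e12:.1f}T/yr")
print(f"share of global GDP: {gdp_share:.0%}")
```

Under these assumptions the AI sector would need to book roughly 2% of global GDP as revenue every year, which is what motivates the 5-10% GDP-lift estimate.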

bryanlarsen | 19 days ago

10% capture seems highly unlikely. That level of capture is only possible for high-touch B2B sales, aka "call-me" pricing.

For call-me pricing to work, you have to ensure that no public sticker price is a suitable alternative: either have no sticker price at all, make the sticker price so high that essentially nobody will buy it, or find a feature (like OAuth) that makes the public version infeasible for businesses.

And then you also have to maintain enough of a monopoly/oligopoly to sustain that level of pricing.

I don't think either of those two conditions will apply in the future.

AI providers now have a sticker price that provides basically all functionality, almost completely eliminating the opportunity for extremely high-margin B2B. They've decided a small slice of a large pie is bigger than a large slice of a smaller pie. I suspect that's true and will continue to be true in the future.

An oligopoly is difficult to sustain with more than 3 global players. Right now we seem to have 3 frontier models for coding that can and will charge more than commodity prices. However, there are open-source non-frontier models that you can use for inference costs only, and even if those don't keep up, it seems likely there will be enough non-frontier models available that their pricing will also be at the commodity level. Those cheaper models will provide significant downward pressure on frontier pricing.

vessenes | 19 days ago

This is a good analyst report - lots of data. Conclusion: firms are spending ahead of sustained revenues right now, and a lot of the money is basically going offshore to TSMC.

I’m not certain of the conclusion - I think a lot depends on amortization schedules. If data centers are fully booked right now, then we don’t need very long amortization schedules at the reported 60+% margin on inference to see this capex fully paid off.

My prior is that we are fulfilling something like 1/10,000th of the world’s reasonable inference demand. There’s a note in the analysis that might back this: it says we are seeing one of the only times ever when hardware prices are rising over time. Combined with spot prices at Lambda Labs (still quite high, I’d say), it doesn’t look like we’re seeing a drop in inference demand.

Under those circumstances, the first phases of this bet, cross-industry, look like they will pay off. If that’s true, as an investment strategy I’d just buy the basket - OpenAI, Anthropic, GOOG, META, SpaceX, MSFT, probably even Oracle - and wait. We’ll either get the rotating state-of-the-art frontier capacity we’ve gotten in the last 18 months, or one of them will have lift-off.

Of those, I think MSFT is the value play - they’re down something like 20% in the last six months? Satya’s strategy seems very sensible to me: slow hyperscale buildouts in the US (lots of competition) and build everywhere else in the world (still not much competition). For countries that can’t build their own frontier models, the next best thing is running them in local datacenters; MSFT has long-standing operational bases everywhere in the world, which is arguably one of their differentiators compared to GOOG/META.

scrollop | 19 days ago

If a different architecture from LLMs is invented (one that could actually "think", that could potentially reach AGI), then perhaps it would be more efficient than LLMs. Perhaps LLMs can make themselves more efficient. They can't even remember "properly". Hallucinations cripple them for serious, professional uses: if they hallucinate 5% of the time and you are asking mission-critical queries, that's a problem.

Perhaps all of these data centers won't be needed. At least not by some of the current AI companies that won't keep up. If that happens to OpenAI, that would be quite a shock to the financial system (and GDP).

Microsoft's changes to Windows have alienated some of their userbase. Copilot is poor compared to its rivals. There's a reason they are down 20%. Linux adoption is accelerating (still too low!).

And don't forget AI on device. When it becomes "good enough" for most tasks, data centre use will reduce.

With the talk of Nvidia backtracking and saying they won't invest $100 billion in OpenAI, and Oracle in a poor financial position with the loans for its upcoming data centres becoming more expensive and dubious (they could fail to repay them), the picture isn't as positive as you make it out to be. Which makes me think that you have an ulterior motive.

throwaway1114 | 18 days ago

At this year's Davos it was said in plain terms: the big LLM suppliers & labs don't have enough demand to generate profits that will cover their spending. They must compete and keep providing supply because whoever wins the race will get most of the rewards. Geopolitics makes this even worse. So they are over-leveraged, and time is running out before the house of cards collapses. All the hype and paid marketing is aimed at making the masses (and, more importantly, the people who make decisions) believe their story and buy into it. I assume for a person without a technical background it's hard to filter signal from noise - it's hard to see that LLMs are what they are, because "they" create a great illusion of intelligence and novel thinking when in fact they just do the math over a big text database.

WarmWash | 19 days ago

AI plans are not going to stay at $20/mo.

People will go to alternative models, but it likely will be as popular as Linux.

pyrophane | 19 days ago

Yeah, this is something I am thinking a lot about. Companies won't be able to sustain this level of spending forever, and one of two things will need to happen:

1. Models become commodities and immensely cheaper to operate for inference as a result of some future innovation. This would presumably be very bad for the handful of companies who have invested that $1T and want to recoup that, but great for those of us who love cheap inference.

2. #1 doesn't happen, and the model providers begin to feel empowered to pass the true cost of training + inference down to the model consumer. We start paying thousands of dollars per month for model usage, and the price gate blocks most people from reaping the benefits of bleeding-edge AI, locking them into cheaper models that are just there to extract cash by selling them things.

Personally I'm leaning toward #1. Future models nearly as good as the absolute best will get far cheaper to train, and new techniques and specialized inference chips will make them much cheaper to use. It isn't hard for me to imagine another DeepSeek moment in the not-so-distant future. Perhaps Anthropic is thinking the same thing, given the rumors that they are pushing toward an IPO as early as this year.

sambull | 19 days ago

That's why they need to widen the moat; it appears not giving us access to hardware might be that moat.

They desperately need LLMs to stay rentier and hardware advances are a direct attack on their model.

general1465 | 19 days ago

Economics will be the decisive force. Paying $1,000 a month for AI, or buying a $10k server and loading a Chinese AI model onto it that can do 90% of what the SOTA models can? Looks like a no-brainer.
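The trade-off here is a simple break-even calculation (the $1,000/month and $10k figures are the commenter's hypotheticals):

```python
# Break-even: hosted subscription vs. a one-off local server.
monthly_subscription = 1_000   # USD/month for hosted SOTA access
server_cost = 10_000           # USD one-off for local hardware

breakeven_months = server_cost / monthly_subscription
print(f"local server pays for itself in {breakeven_months:.0f} months")
# Ignores electricity and depreciation, and the ~10% capability gap
# the commenter is willing to accept.
```

At these numbers the server pays for itself in well under a year, which is what makes it look like a no-brainer.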

wslh | 19 days ago

Possibly, but that assumes continuity. New math and algorithmic breakthroughs could make much of today’s AI stack legacy, reshuffling both costs and winners.

scrollop | 19 days ago

Yeah, they'll be free - on device and "good enough".

If you want the best, then pay.

co_king_3 | 19 days ago

I don't know about you, but I benefit so much from using Claude at work that I would gladly pay $1,500-$2,000 per month to keep using it.

fastball | 19 days ago

A significant part of the capex is just energy, so even if there is some sort of AI black swan event and the data centers become obsolete overnight (unlikely), energy is literally the root of all bounty so it is good that something is incentivizing increased resource allocation in that area.

xyst | 19 days ago

And this gamble is paid for by American taxpayers, increased cost of utilities, and multibillion dollar corporations receiving tax breaks/subsidies from the cities/counties they build in.

This country is so awful. Great if you are rich. Awful if you are not in this top 0.01-1%.

A massive $79T has been transferred from bottom 90% to top 1% since the 1970s. [1]

[1] https://www.rand.org/pubs/working_papers/WRA516-2.html

jryan49 | 19 days ago

I love how they say with a straight face that when AI takes over they will finally share all the fruits of capital with us.

coffeemug | 19 days ago

To be intellectually honest about it, you have to answer a bunch of questions:

1. Awful compared to what?

2. Was there an equivalent transfer outside America?

3. What is the cause? What ratio of rent-seeking/shady activity vs. a consequence of natural forces (e.g. technological change)?

throwmeaway820 | 19 days ago

> A massive $79T has been transferred from bottom 90% to top 1% since the 1970s

This assertion is based on comparing reality with a counterfactual where income distributions remained static from 1975 to the present. Real median personal income roughly doubled over this time period.

The use of the word "transferred" seems a little intellectually dishonest here. The use of the counterfactual seems to suggest that income distribution has no relationship with growth in total income, and total income would have been exactly the same regardless of income distribution. I see no reason to assume that to be the case.

BloondAndDoom | 19 days ago

If it’s any consolation, I’m rich yet the country is still shit. (Comparing to Western Europe, which I emigrated from.)

tim333 | 19 days ago

There's some of that, but the vast majority is paid for with private-sector money - business profits and investor capital.

francisofascii | 19 days ago

Not to mention all the land being gobbled up to build these data centers.