He writes as if only datacenters and network equipment will remain after the AI bubble bursts; as if there won't be any AI models anymore, nothing left from the big training runs and trillion-dollar R&D, and no inference being served.
I can run quite useful models on my PC. They might not change the world, but I got a usable transcript of an old foreign-language TV show and then machine-translated it to English. It's not as good as professional subtitles, but I wasn't willing to pay the cost of that option.
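A local pipeline like the one described could be sketched as below. This is an assumption about the setup, not the commenter's actual one: `whisper` refers to the open-source openai-whisper package, and `translate()` is a stub standing in for whatever local translation model is used.

```python
# Sketch of a local transcribe-then-translate pipeline (assumed setup).
# Speech-to-text via the open-source `openai-whisper` package; the
# translate() stub stands in for any locally run translation model.

def format_timestamp(seconds: float) -> str:
    """Render seconds as an SRT timestamp, e.g. 3.5 -> 00:00:03,500."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Turn (start, end, text) triples into SRT subtitle blocks."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(
            f"{i}\n{format_timestamp(start)} --> {format_timestamp(end)}\n"
            f"{text.strip()}\n"
        )
    return "\n".join(blocks)

def translate(text: str) -> str:
    """Stub: replace with a call into your local translation model."""
    return text  # identity placeholder for illustration

def transcribe_and_translate(path: str) -> str:
    """End-to-end: audio file in, translated SRT subtitles out."""
    import whisper  # pip install openai-whisper
    model = whisper.load_model("small")  # runs fully locally
    result = model.transcribe(path)
    segments = [(s["start"], s["end"], translate(s["text"]))
                for s in result["segments"]]
    return segments_to_srt(segments)
```

The timestamp and SRT helpers are the boring glue; the two model calls are the only steps that need any real hardware.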
There are a gazillion use cases for these things in business that haven't even begun to be tapped. Demand for tokens should be practically unlimited for many years to come. Some of those ideas won't be financially viable, but a lot will.
Consider how much software is out there that can now be translated into every human language continuously, opening up customers and markets that were previously ignored because of the logistical complexity and cost of hiring human translation teams. Inferencing that work is a no-brainer, but there's a lot of workflow and integration needed first, which takes time.
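The continuous-localization workflow hinted at here could look something like the following sketch. Everything in it is an assumption for illustration: `translate()` is a stub standing in for an LLM call, and the cache keys strings by content hash so only new or edited text ever hits the model.

```python
# Sketch of an incremental localization step (illustrative, not a real
# product's workflow): translate only strings that are new or changed
# since the last run. translate() stubs out the LLM inference call.

import hashlib

def translate(text: str, target_lang: str) -> str:
    """Stub: replace with an LLM inference call."""
    return f"[{target_lang}] {text}"

def sync_catalog(source: dict, cache: dict, target_lang: str) -> dict:
    """source: message-id -> English string; cache: content-hash -> translation."""
    out = {}
    for msg_id, text in source.items():
        key = hashlib.sha256(f"{target_lang}:{text}".encode()).hexdigest()
        if key not in cache:
            # Only new or edited strings reach the model.
            cache[key] = translate(text, target_lang)
        out[msg_id] = cache[key]
    return out
```

On a second run over an unchanged catalog the cache absorbs every lookup, so the marginal inference cost of "continuously" localized software is only the diff, not the whole product.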
The models get more efficient every year, and consumer chips get more capable every year. A GPT-5-level model will be running locally on every phone within five years.
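A back-of-envelope check on why local inference keeps getting more plausible: weight memory scales with parameter count times bits per weight, so quantization shrinks the footprint dramatically. The parameter counts below are illustrative assumptions, not claims about any particular model.

```python
# Rough weight-memory arithmetic for on-device inference.
# Numbers are illustrative assumptions only.

def weight_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate GiB of memory needed just for the weights."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# A hypothetical 8B-parameter model quantized to 4 bits:
print(round(weight_gib(8, 4), 1), "GiB")  # ~3.7 GiB of weights
```

That puts a capable quantized model within the RAM budget of current flagship phones, before counting activations and KV cache, which is where the yearly efficiency gains matter.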
I run models for coding on my own machines. They’re a trivial expense compared to what I earn from the work I do.
The “at a loss” scenario comes from (1) training costs and (2) companies selling tokens below market price to gain market share. Neither of those implies that people won't run models in the future. Training new frontier-class models could become an issue, but even that seems unlikely given what these models are capable of.
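The training-cost half of the argument is an amortization question, which a few lines of arithmetic make concrete. Every number here is a made-up illustration, not data about any real model:

```python
# Illustrative amortization arithmetic only; all figures are assumptions.
# If a hypothetical $1B training run is spread over 1 quadrillion
# inference tokens served during the model's lifetime:

training_cost_usd = 1e9   # assumed one-time training cost
lifetime_tokens = 1e15    # assumed tokens served over the model's life

amortized_per_million = training_cost_usd / lifetime_tokens * 1e6
print(amortized_per_million, "USD per million tokens")  # prints 1.0
```

Under those assumptions the sunk training cost adds only $1 per million tokens, so whether inference is "at a loss" depends almost entirely on serving volume and the price war, not on the training bill itself.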
Is there a genuine use case for today's models, other than for identifying suckers? You can't even systematically apply an LLM to a list of text transformation tasks, because the ability to produce consistent results would make them less effective sycophants.
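For what it's worth, the "systematically apply an LLM to a list of text transformation tasks" setup usually looks like the sketch below, with greedy decoding and a majority vote across samples used to damp nondeterminism. `llm_call()` is a stub standing in for a real model invocation; whether the vote actually yields consistent results is exactly the point being disputed.

```python
# Sketch of batch-applying an LLM to text transformation tasks, with a
# majority vote across samples to stabilise output. llm_call() is a stub
# standing in for a real inference call (e.g. run with temperature 0).

from collections import Counter

def llm_call(prompt: str) -> str:
    """Stub: replace with a real model invocation."""
    return prompt.upper()  # deterministic placeholder "transformation"

def transform(texts, n_samples=3):
    """Apply the model to each text, keeping the most common answer."""
    results = []
    for text in texts:
        samples = [llm_call(text) for _ in range(n_samples)]
        winner, _ = Counter(samples).most_common(1)[0]  # majority vote
        results.append(winner)
    return results
```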
quesera|4 months ago
Creating new LLMs might be out of reach for all but very well-capitalized organizations with clear intentions, and governments.
There might be a viable market for SLMs, though. Why does my model need to know about the Boer Wars to generate usable code?