My current intuition on this topic is that they are right about scaling but they are training on the wrong data.
LLMs were not intended to be the core foundation of artificial intelligence; they were an experiment around deep learning and language. Their success was an almost accidental byproduct of the availability of large amounts of structured data to train on, and of the natural human bias to be tricked by language (the Eliza effect).
But human language itself is quite weak from a cognitive perspective, so we end up with an extremely broad but shallow and brittle model. The recent and extremely costly attempts to build reasoning around it don't seem much more promising than a pile of hardcoded heuristics, basically ignoring the bitter lesson.
I've seen many argue that a real human-level AI should be trained from real-world experience. I am not sure this is true, but training should likely start from lower-level data than language, still using tokens and huge scale, and probably deeper networks.
Not all AI is LLMs. That's just what's most prevalent right now. There's still great work being done by models that don't "speak" but "perform". The issue is, as you said, that they need to be trained to perform. The more tools like Claude Code are used, the more training they receive as well. I do think we'll see a plateau of diminishing returns (if we haven't reached it already), and we'll seek out new algorithms to improve on it.
Never underestimate the will of someone determined to gain an extra 10% performance or accuracy. It's the last 1% I worry about. 99.99% uptime is great until it isn't. 99% accuracy is great until it isn't. These failures could be mitigated by running inference on different quantizations of a model tree, but ultimately we're going to have to triple-check the work somehow.
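A minimal sketch of that cross-checking idea, with each quantized variant abstracted as a plain callable (no particular runtime implied, and exact-match voting is a simplification; real answers would need fuzzier comparison):

```python
# Run the same prompt through several quantizations of one model and accept
# the answer only when a majority of the variants agree.
from collections import Counter
from typing import Callable, Sequence

def checked_answer(prompt: str, variants: Sequence[Callable[[str], str]]) -> str:
    answers = [generate(prompt) for generate in variants]
    winner, votes = Counter(answers).most_common(1)[0]
    if votes > len(answers) // 2:
        return winner
    raise RuntimeError("quantization variants disagree; escalate for review")
```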
Definitely smarter people than me have thought about this already, but I’ve been trying to think about human language and how thoughts form in my head lately. How does thinking feel to you?
I feel like thoughts appear in my head conceptually mostly formed, but then I start sequentially coming up with sentences to express them, almost as if I'm writing them down for somebody else. In that process, I edit a bunch, so the final thought is influenced quite a bit by how English tends to be written. Maybe even constrained by expressibility in English. But English has the ability to express fuzzy concepts. And the kernel started as a more intuitive thing. It is a weird interplay.
What happens is they go out of business: "these firms spent five hundred and sixty billion dollars on A.I.-related capital expenditures in the past eighteen months, while their A.I. revenues were only about thirty-five billion."
DeepSeek (and the like) will prevent the kind of price increases necessary for them to pay back hundreds of billions of dollars already spent, much less pay for more. If they don't find a way to make LLMs do significantly more than they do thus far, and a market willing to pay hundreds of billions of dollars for them to do it, and some kind of "moat" to prevent DeepSeek and the like from undercutting them, they will collapse under the weight of their own expenses.
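Just to make the quoted gap concrete (both numbers taken straight from the quote above):

```python
capex_billion = 560    # A.I.-related capital expenditures, past 18 months
revenue_billion = 35   # A.I. revenues over roughly the same period
print(f"revenue covers {revenue_billion / capex_billion:.1%} of capex")  # 6.2%
```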
DeepSeek is also undercutting itself. No one is making a profit here; everyone is trying to gobble up market share. Even if you have the best model and don't care to make a dime, inference is very expensive.
> If they don't find a way to make LLMs do significantly more than they do thus far...
They only need two things, really: A large user base and a way to include advertising in the responses. The market willing to pay hundreds of billions of dollars will soon follow.
The businesses are currently in the user base building stage. Hemorrhaging money to get them is simply the cost of doing business. Once they feel that is stable, adding advertising is relatively easy.
> and some kind of "moat" to prevent DeepSeek and the like from undercutting them
Once users are accustomed to using a service, you have to do some pretty horrendous things to get them to leave. "Give me your best hamburger recipe" -> "Sure, here is my best burger recipe [...] However, if you don't feel like cooking tonight, give the Big Mac a try!" wouldn't be enough to cause any meaningful loss of users.
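A toy sketch of how crude that insertion could even be and still work; the keyword rule and ad copy here are invented for illustration:

```python
# Append a sponsored suggestion whenever a response matches an advertiser's
# keyword. Real systems would presumably be subtler, but maybe not by much.
ADS = {"burger": "However, if you don't feel like cooking tonight, give the Big Mac a try!"}

def with_ads(answer: str) -> str:
    for keyword, pitch in ADS.items():
        if keyword in answer.lower():
            return f"{answer} {pitch}"
    return answer

print(with_ads("Sure, here is my best burger recipe [...]"))
```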
We did a test of GPT5 yesterday. We asked it to generate a synopsis of a scientific topic and cite sources. We then checked those sources. GPT5 still hallucinated 65% of the citations.
It did things like:
- make up the paper title
- make up the authors for a real paper title
- mix a real title and a real journal
If it can't even reference real papers it certainly can't be trusted to match up claims of fact with real sources.
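A sketch of what an automated version of that source check can look like, done outside the model: look each claimed title up in Crossref's public index and fuzzy-compare against the best hit. The 0.9 similarity threshold is an arbitrary choice, and a title-only check like this would still miss the mixed-up author/journal cases without extra field comparisons:

```python
import requests
from difflib import SequenceMatcher

def title_seems_real(claimed_title: str, threshold: float = 0.9) -> bool:
    # Ask Crossref for the closest bibliographic match to the claimed title.
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": claimed_title, "rows": 1},
        timeout=10,
    )
    items = resp.json()["message"]["items"]
    if not items:
        return False
    best = " ".join(items[0].get("title", [""]))
    return SequenceMatcher(None, claimed_title.lower(), best.lower()).ratio() >= threshold
```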
Current AI tools generate citations that LOOK real but ARE fake. This might not be solvable inside the LLM. If anyone could do it, it'd be OpenAI. (OK maybe I'm giving them too much credit, but they have a crap-ton of money and seem to show a real interest in making their AI better)
If it can't be done in the LLM we can't trust LLMs basically ever.
I suppose there's a pretty big loophole here. Doing it outside the LLM but INSIDE the LLM product would be good enough.
The first AI tool to incorporate that (internal citation and claim checking) will win, because if the AI can check itself and prevent hallucinated garbage from ever reaching the user, we can start to trust them, and then they can do everything we've been promised.
Until that day comes we can't trust them for anything.
Google already did this; give the free Gemini Deep Research a spin. It's not perfect, but I have a feeling you'll be surprised, if that's your honest impression.
The title is irritating, conflating AI with LLMs. LLMs are a subset of AI. I expect future systems will be mobs of expert AI agents rather than relying on LLMs to do everything. An LLM will likely be in the mix for at least the natural language processing but I wouldn't bet the farm on them alone.
That battle was lost long ago, when the leading LLM companies and organizations insisted on referring to their products and models solely as "AI", not the more specific "LLMs". Implementers of that technology followed suit, and that's just what it means now.
You can't blame the New Yorker for using the term in its modern, common parlance.
The computing power alone of all these GPUs would bring a revolution in simulation software. I mean zero AI/machine learning, just being able to simulate far more things than we can now.
Most industry-specific simulation software is REALLY crap, much of it from the '80s and '90s and barely evolved since then. A lot of it is still stuck on a single CPU core.
If the New Yorker published a story titled "What if LLMs Don't Get Better Than This?" I expect the portion of their readers who understood what that title meant would be pretty tiny.
AI is what people think AI is. In the 80s, that was expert systems. In the 2000s, it was machine learning (not expert systems). Now, it is LLMs — not machine learning.
You can complain, but it's like an old man shaking his fist at the clouds.
Now, if you want to talk about cybernetics…
The title annoys me more because it doesn't mention anything about time. AI will almost certainly get a good bit better eventually. The question is whether that happens in the next couple of years or whether we'll have to wait for some breakthrough.
I'm amused they seem to refer to Marcus and Zitron as "these moderate views of A.I." They are both pretty much professional skeptics who seem to fill their days writing "AI is rubbish" articles.
AI is LLMs now. Similar to how machine learning became AI 5-10 years ago.
I'm not endorsing this, just stating an observation.
I do a lot of deep learning for computer vision, which became AI a while ago. Now, when you use the word AI in this context, it will confuse people because it doesn't involve LLMs.
> You didn’t need a bar chart to recognize that GPT-4 had leaped ahead of anything that had come before.
You did though. I remember when GPT-4 was announced, OpenAI downplayed it and Altman said the difference was subtle and wouldn't be immediately apparent. For a lot of the stuff ChatGPT was being used for, the gap between 3 and 4 wasn't going to really leap out at you.
https://fortune.com/2023/03/14/openai-releases-gpt-4-improve...
In the lead up to the announcement, Altman has set the bar low by suggesting people will be disappointed and telling his Twitter followers that “we really appreciate feedback on its shortcomings.”
OpenAI described the distinction between GPT-3.5—the previous version of the technology—and GPT 4, as subtle in situations when users are having a “casual conversation” with the technology. “The difference comes out when the complexity of the task reaches a sufficient threshold—GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5,” a research blog post read.
In the years since, we've gotten a lot more demanding of our models. Back then, people were happy if they got models to write a small, simple function and it worked. Now they expect models to manipulate large production codebases and get it right the first time. So the difference between GPT-3 and GPT-4 would be more apparent now. But at the time, the reaction was somewhat muted.
> Back then, people were happy if they got models to write a small, simple function and it worked. Now they expect models to manipulate large production codebases and get it right the first time.
This push is mostly coming from the C-level and the hustler types, both of whom need this to work out in order for their employeeless-corporation fantasy to come true.
OpenAI has 700+ million users. Sam recently said only 7% of Plus users were using thinking (o3)!!! That means 93% of their users were using nothing but 4o!
Clearly the OpenAI leadership saw these stats and understood that the main initial goal of GPT-5 is to introduce this auto-router, not to go all in on intelligence for the 3-7% who care to use it.
This is a genius move IMO, and it will get tons of users to flood to ChatGPT over competitors. Grok, Gemini, etc. are now fighting over the scraps of the top 1% while OpenAI goes after the blue ocean of users.
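A toy sketch of what such a router might reduce to; the markers, length cutoff, and model names are all invented (a real router would presumably be a learned classifier, not a keyword list):

```python
# Spend "thinking" compute only when the query looks complex enough to need it.
def route(query: str) -> str:
    hard_markers = ("prove", "derive", "refactor", "step by step", "debug")
    if len(query) > 400 or any(m in query.lower() for m in hard_markers):
        return "thinking-model"
    return "fast-default-model"

print(route("give me your best hamburger recipe"))        # fast-default-model
print(route("prove this invariant holds, step by step"))  # thinking-model
```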
> Sam recently said only 7% of Plus users were using thinking (o3)
Thinking or just o3, and over what timeframe? There were a lot of days where I would just rely on o4-mini and o4-mini (high) because my queries weren't that complex and I wanted to save my o3 quota and get faster responses.
> That means 93% of their users were using nothing but 4o!
Also potentially 4.1 and 4.5?
> That means 93% of their users were using nothing but 4o!
You can't say that with any certainty, but I personally share the impression that growth has not kept up with the hype of 2023. Take the following for example: an article from April 2023 that strongly implies that the next version of GPT would be so much more powerful than the current one that it would be dangerous to work on or even release.
Altman specifically used the version number "GPT5" back then. GPT-5 is quite good, but is it the kind of technology that requires a world-wide moratorium on its development, lest it make humanity redundant?
"""
(Friedman) asked Altman for his thoughts on the recently released and widely circulated open letter demanding an AI pause. In response, the OpenAI founder shared some of his critiques. “An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not, and won’t for some time,” Altman noted. “So in that sense, [the letter] was sort of silly.”
But, GPT-5 or not, Altman’s statement isn’t likely to be particularly reassuring to AI’s critiques, as first pointed out in a report from the Verge. The tech founder followed up his “no GPT-5″ announcement by immediately clarifying that upgrades and updates are in the works for GPT-4. There are ways to increase a technologies’ capacity beyond releasing an official, higher-number version of it.
"""
(from: https://gizmodo.com/sam-altman-open-ai-chatbot-gpt4-gpt5-185...)
How can you say progress has stalled without having visibility on the compute costs of GPT-5 relative to o3?
How can you say progress has stalled by referring to changes in benchmarks at the frontier over just 3.5 months?
The rate of improvement has slowed significantly. And chasing benchmarks is making everything worse IMO. Opus 4.1 is worse than Sonnet 3.7 to me :/.
I think the future will be:
1. Ads and quantization/routing to chase profits
2. Local models start taking over. New companies will slide in without the huge losses and provide what Claude/OpenAI do today at reasonable margins
3. Apple/Google eat up lots of the market by shipping good-enough models with iOS/Android
My personal test question keeps bombing, and I think it's something they should be capable of doing?
Are those math contests? Are their questions and answers in the training set?
Let's say that these things really won a math Olympiad by thinking. OK, then I would like them to write parsers based on a well-defined expression grammar or language spec. Nothing as bad as nearly unparseable C++ or JavaScript.
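For concreteness, a toy instance of the kind of fully specified task meant here (this particular grammar is my own invented example):

```python
import re

# Recursive-descent parser for the tiny grammar:
#   expr := term (('+' | '-') term)*
#   term := NUMBER
def parse_expr(src: str) -> int:
    tokens = re.findall(r"\d+|[+-]", src.replace(" ", ""))
    pos = 0

    def term() -> int:
        nonlocal pos
        if pos >= len(tokens) or not tokens[pos].isdigit():
            raise SyntaxError(f"expected a number at token {pos}")
        pos += 1
        return int(tokens[pos - 1])

    value = term()
    while pos < len(tokens):
        op = tokens[pos]
        pos += 1
        value = value + term() if op == "+" else value - term()
    return value

print(parse_expr("12 + 3 - 4"))  # 11
```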
The AIs refuse, despite the prompt, to write a complete parser; they hallucinate tests, do things like just call the already-working compiler on the CLI, and force repetitive reprompts that still won't complete the task.
To me, this is a good example of a task I would give AI as a service to see if it will reliably do something that's well specified, moderately annoying, and is most definitely in the training set if they are pulling data from "the internet".
AI doesn't need to get better than this. It is already saving millions of hours of previously wasted human productivity. The biggest threat to these companies, if their products do not improve, is the local running of LLMs. That would finally justify consumers buying more memory and processor speed.
AI getting better is maybe 50% or less of the equation. The other part is the infrastructure supporting AI applications. The infrastructure and interfaces that need to be built to fully take advantage of what's already here still have a long way to go.
It appears that Cal Newport has decided to be the one to most publicly initiate the inevitable Trough of Disillusionment stage of the hype cycle. I'm not sure it'll last very long, though, considering (for starters) Google DeepMind's gold medal at the recent International Math Olympiad. Also, while he criticizes the cost-cutting measure that is GPT-5, he doesn't even mention GPT-5 Pro, which is performing excellently.
I always expect things like this to eventually deliver about 90% of what they promise, which turns out to be 100% for some niche uses; for the rest, it just gets abandoned because 90% isn't good enough. Like when voice recognition became super hyped in the late '90s: it was going to change how the world interacts with machines, and eventually it turned into "Hey Siri".
Which to me means: what's the big O of this entire venture?
Ultimately, what they need to do is add nines of reliability. I guess I could argue that what they are producing now is like two nines: 99% accuracy.
Of course, that depends on how you measure it, yada yada yada. So for things like self-driving, I could see how people could argue that the accuracy rate is 99.9% on a minute-by-minute basis.
But how many nines do you need? Especially for self-driving: five more? What's the computational cost to achieve that? Is it just five times? Is it 25 times? Is it two to the fifth power?
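A back-of-the-envelope reading of those guesses, under the toy assumption (mine, not an established law) that each additional nine multiplies compute by a constant factor k:

```python
# If every extra nine of reliability costs a constant compute multiplier k,
# then n more nines cost k**n times today's budget.
def relative_cost(extra_nines: int, k: float) -> float:
    return k ** extra_nines

print(relative_cost(1, 5))  # "just five times"        -> 5
print(relative_cost(2, 5))  # "25 times"               -> 25
print(relative_cost(5, 2))  # "two to the fifth power" -> 32
print(relative_cost(5, 5))  # five more nines at k=5   -> 3125
```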
On the one side, I read stuff about exponential gains with every new model. On the other side, the coding improvements look logarithmic to me.
That's completely possible if the development of LLMs follows an S-curve (sigmoid). At the beginning of the curve it will look exponential, then linear, and finally logarithmic. For different tasks, LLMs could be at different points on the curve, which would explain why some people perceive the improvements as exponential and others perceive them as logarithmic: they are simply working on different things and so experience different gradients.
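A quick numeric illustration of that (a plain logistic curve; the numbers only show the shape, nothing about real models):

```python
import math

# A logistic curve looks exponential early, linear near the midpoint, and
# flat late; the local gain per step is what each observer "feels".
def logistic(t: float) -> float:
    return 1.0 / (1.0 + math.exp(-t))

for t in range(-6, 7, 2):
    gain = logistic(t + 0.5) - logistic(t - 0.5)  # improvement per unit step
    print(f"t={t:+d}  capability={logistic(t):.3f}  gain/step={gain:.3f}")
```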
They didn't answer much of the "What if," though... I'm just imagining the massive financial losses taken by so many, and whether a bailout becomes necessary, because too-big-to-fail now means Microsoft, Google, Facebook et al., since we transferred so much of financial-engineering economics onto them after '08.
Those three companies have products outside AI and won't die quickly. The ones that will collapse are betting exclusively on improvements in AI. It will be fun to watch the VC money burn.
Last time I checked, each of these companies was still hugely profitable. So it's not going to be your average FANG in trouble here, but rather the VCs and others who've jumped onto the AI craze.
AI is so new and so powerful that we don't really know how to use it yet. The next step is orchestration. LLMs are already powerful, but they need to be scaled horizontally. "One-shotting" something with a single call to an LLM should never be expected to work. That's not how the human brain works. We iterate, we collaborate with others, we reflect... We've already unlocked the hard and "mysterious" part; now we just need time to orchestrate and network it.
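A bare-bones sketch of that iterate-and-reflect loop, with the model call abstracted as a callable so nothing here depends on any particular vendor API (the prompts are invented placeholders):

```python
from typing import Callable

# Draft, critique, revise: several cheap passes instead of one-shotting.
def iterate(task: str, llm: Callable[[str], str], rounds: int = 3) -> str:
    draft = llm(f"Attempt this task:\n{task}")
    for _ in range(rounds):
        critique = llm(f"Task:\n{task}\n\nDraft:\n{draft}\n\nList concrete flaws.")
        draft = llm(f"Task:\n{task}\n\nDraft:\n{draft}\n\nFlaws:\n{critique}\n\nRevise.")
    return draft
```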
I think you're right. Even if we accept the premise that there's only room for minor marginal improvements, there are vast amounts of room for improvement in integrations, MCP, orchestration, prompting, etc. I'm talking mostly about coding agents here, but it applies more widely.
It’s a completely new tool, it’s like inventing the internal combustion engine and then going, “well, I guess that’s it, it’s kinda neat I guess.”
I think that's it. Even if there were no improvements to LLMs as they exist today, the integration and usage can still be vastly improved. Right now, we don't have multiple LLM-aware systems communicating with a standardized information repository.
Right now we have the technology to have an AI observe a room, count the people in it, see what they're doing, observe their mood, and set the lighting to the appropriate level. We just don't have all the sensors and integrations and protocols to manage that. The LLM's interfaces with email, your bank, your phone, etc., are crude and clunky. So much more could be done with the LLMs we have now.
(And just to be clear, most of those integrations sound horrible and dystopian. But they're examples.)
Powerful but we don't know how to use it?
If it were as powerful as all you true believers spout, the usefulness would be self-evident, and that would be the display of its power.
But apparently it is powerful just because you say so, and then something, something ... business model ...
What if it does?
There's a certain type of fear . . .
-- David Fahl
Same fear, different day.
Wow, did you encapsulate millennia of management-labor disputes by saying "don't worry, be happy"?
Let's play the same game with totalitarianism!
It's the fear they are watching everything
It's the fear nobody is watching at all
Oh wow, I totally understand the threat of totalitarianism from that.
And I bring up totalitarianism in particular because, aside from vastly empowering the elites in the war against labor, AI also empowers them for totalitarian monitoring and control.
I mean, look at the first plane, then the first jets: it was understandable to assume we would be traveling the galaxy by something like 2050.
Meanwhile, planes have stayed basically the same for the last 60 years.
LLMs are great, but I firmly believe that in 2100 everything will be basically the same as in 2020: no free energy (fusion), no AGI.
If you provide people with that, they typically shut up and stay out of the way. Everyone should be more afraid of the former than the latter.