They're so obviously going to fail, but in a good way. The idea was to get the world addicted, then raise prices, but the reality is that there's going to be a race to the bottom on pricing because none of them is significantly better than the others. They don't own anything; it's just math, and they can be undercut by an OSS bomb from China at any moment.
Even worse, they've bet on the math not advancing. If it gets significantly more power-efficient, which could literally happen tomorrow if the right paper goes up on arXiv, maybe a 10-year-old laptop could give "good enough" results. All those data centers are now trash and your companies are now worth a negative trillion dollars.
I think all of these factors are completely independent of whether AI works or not, or how well it works. Personally, I don't care if it replaces programmers: get another job. I have experienced it, and at this point it is mediocre.
Of course I am not using the bleeding edge, and I am not privy to the top-secret insider stuff, which may well be orders of magnitude better. But if they've got it, why would they keep it a secret when people are desperate to give them money? If they're hiding it, it's something that they know somebody could analyze and knock off, and then it's a race to the bottom again.
In a race to the bottom, we all win. Except the people and economies who bet their lives on it being a race to the top.
It’s funny reading this parallel world that some portion of people have constructed for themselves.
It has been three years and these tools can do a considerable portion of my day to day work. Salvage the wreckage? Unfortunately I think that many people’s jobs are essentially in the “Coyote running off a cliff but not realizing it yet” phase or soon to be.
I think this comment is reacting to a different argument than the one the article is actually making.
The piece isn’t claiming that AI tools are useless or that they don’t materially improve day-to-day work. In fact, it more or less assumes the opposite. The critique is about the economic and organizational story being told around AI, not about whether an individual developer can ship faster today.
Saying “these tools now do a considerable portion of my work” operates on the micro level of personal productivity. Doctorow is operating on the macro level: how firms reframe human labor as “automation,” push humans into oversight and liability roles, and use exaggerated autonomy claims to justify valuations, layoffs, and cost-cutting.
Ironically, the “Wile E. Coyote running off a cliff” metaphor aligns more with the article than against it. The whole “reverse centaur” idea is that jobs don’t disappear instantly; they degrade first. People keep running because the system still sort of works, until the ground is gone and the responsibility snaps back onto humans.
So there’s no contradiction between “this saves me hours a day” and “this is being oversold in ways that will destabilize jobs and business models.” Those two things can be true at the same time. The comment seems to rebut “AI doesn’t work,” which isn’t really the claim being made.
I think you’ll find the essay much more nuanced than that. It only incidentally discusses what you’re thinking about.
> Think of AI software generation: there are plenty of coders who love using AI. Using AI for simple tasks can genuinely make them more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it’s clear they are not hoping to make some centaurs.
I mostly do normal React crap, ostensibly the easiest thing for these tools to do, and these tools cannot do a considerable portion of my work. Yes I've used the latest model. Yes I've used the latest agentic IDE. Yes I've tweaked my prompts and added repository rule files. Yes I've done this approximately every three months for the last two years. This shit does not work. Nobody ever posts proof of it working well in any <great project>.
I am at the point where if I read something from a software developer like, "these tools can do a considerable portion of my day to day work", I have to just assume that person's day to day work was garbage. And this is not terribly surprising, because a lot of software developers I have personally worked with did produce mostly garbage. Some amount of those people are surely using AI and posting about it, and that explains what we continually see online. Sorry to any offended.
> It has been three years and these tools can do a considerable portion of my day to day work.
Agreed.
> Unfortunately I think that many people’s jobs are essentially in the “Coyote running off a cliff but not realizing it yet” phase or soon to be.
Eh… some people, maybe. But history shows that nearly every time a tool makes people more efficient, we get more jobs, not fewer. Jevons paradox and all that: https://en.wikipedia.org/wiki/Jevons_paradox
> AI is a statistical inference engine. All it can do is predict what word will come next based on all the words that have been typed in the past.
If we keep saying this hard enough over and over, maybe model capabilities will stop advancing.
Hey, there's even a causal story here! A million variations of this cope enter the pretraining data, the model decides the assistant character it's supposed to be playing really is dumb, human triumph follows. It's not _crazier_ than Roko's Basilisk.
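Whatever one makes of that framing, the mechanics the quote gestures at really are just "scores over a vocabulary, turned into probabilities, then sampled." A toy sketch to make that concrete (the vocabulary and hard-coded logits are purely illustrative; a real model computes the scores with billions of parameters, but the output contract is the same):

```python
import math
import random

# Toy "model": hard-coded next-token scores (logits), one score per
# vocabulary entry. A real LLM computes these scores from context with
# billions of parameters; the final step below is unchanged.
VOCAB = ["the", "cat", "sat", "mat", "."]
LOGITS = {
    "the": [0.1, 2.0, 0.2, 1.5, 0.1],  # after "the": mostly "cat" or "mat"
    "cat": [0.1, 0.1, 2.5, 0.1, 0.3],  # after "cat": mostly "sat"
}

def next_token(prev: str) -> str:
    scores = LOGITS[prev]
    exps = [math.exp(s) for s in scores]            # softmax, step 1
    probs = [e / sum(exps) for e in exps]           # softmax, step 2
    return random.choices(VOCAB, weights=probs)[0]  # sample proportionally

print(next_token("the"))  # usually prints "cat" or "mat"
```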
There's a fundamental disconnect: OP refers to senior engineers being replaced with AI, whereas the evidence and logical reasoning point much more to junior engineers being replaced by AI. And that premise seems quite plausible...
> OP refers to senior engineers being replaced with AI, whereas the evidence and logical reasoning point much more to junior engineers being replaced by AI.
If industry cared about future seniors, they'd invest in juniors. But that's not what's happening. AI will effectively replace seniors in 20 years on the current trajectory. Whether that replacement is adequate is the bigger question.
I think the junior thing started around late '24, early '25, because back then the models were at or above that level, with somewhat flaky reliability. In the past year that's changed. We are now at "mostly reliable" for any junior-level stuff, and "surprisingly capable, maybe still needs some hand-holding" for advanced/senior-level stuff. And somewhat superhuman if the problem is easily verifiable in a feedback loop (see the AtCoder results).
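On the "easily verifiable in a feedback loop" point: the loop itself is trivial to set up, which is part of why verifiable domains pull ahead. A hedged sketch of the generate-verify cycle; `call_model` is a hypothetical stand-in for whatever API or agent you use, and pytest stands in for any automatic verifier:

```python
import subprocess

def call_model(task: str, feedback: str) -> str:
    """Hypothetical stand-in: ask your LLM/agent for candidate code,
    including any verifier output from the previous round."""
    raise NotImplementedError  # swap in a real API call

def solve_with_feedback(task: str, max_rounds: int = 5) -> str | None:
    feedback = ""
    for _ in range(max_rounds):
        code = call_model(task, feedback)
        with open("candidate.py", "w") as f:
            f.write(code)
        # pytest here stands in for any automatic verifier: a compiler,
        # a judge server, a property checker...
        result = subprocess.run(["pytest", "tests/"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code                           # verifier passed: done
        feedback = result.stdout + result.stderr  # retry with the failure
    return None                                   # out of attempts
```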
The whole AGI industry is like one of those projects that claims "90% finished" from the time of the first demo, then for the next N years, all the way up until the project is eventually canceled.
Yeah we can spew out millions of lines of unmaintainable slop code! Now we can even write a slop unusable browser!
All this shit looks like progress, but it's all really a cover for lack of progress. And now we've got the entire economy as a bet on it.
None of this is to say there's nothing useful coming out of the industry. I use it productively for a ton of things. But, the reverse centaur thing is a great analogy. The money getting ploughed into it is assuming reverse centaur will be the final outcome, not a set of useful productivity tools. Once investors start to realize that all we're going to get out of it is the latter, we'll be in for a world of hurt.
Granted, one nice thing about the AI wave is that I bet it'll be able to keep slinging new and idiotic slop for decades that'll keep successfully unburdening investors from their money, because, "hey look, it's 90% finished!" Who knows, maybe that's the point.
It's disappointing how so many people blame AI for our problems. I see this pattern over and over; people never blame the socio-economic system and blame technology instead. Technological improvement is the only thing which allows us to survive the social, cultural and moral decline that we've been experiencing. People blame tech because it allows the system to be highly inefficient and still hold together. But if people blame tech, root issues will not be addressed.
I don’t think the article is blaming AI as a technology. It’s criticizing how the current socio-economic system uses AI.
The argument isn’t “tech is the problem,” but that autonomy narratives are used to shift risk, degrade labor, and justify valuations without real system-level productivity gains. That’s a critique of incentives and power structures, not of technological progress itself.
In that sense, “don’t blame tech, blame the system” is very close to the article’s point, not opposed to it.
It is pretty plain to see that technology enables socio-economic disharmony, to say the least. While it may not be the "cause," it is certainly a potent accelerant.
I think technology has always been a tool to impose will over others. Computing was just such a unique kind of technology where, for a decently long time, only a subset of people knew how to use it, and that subset didn't have existing wealth and power (or not enough). It's taken up to now for the ones with real power to catch up, or for some of the ones who didn't have it to gain real power themselves. And they will use technology for what it is ultimately for: to impose their will on others.
Well it's kinda both. One step towards socio-economic change would be if everyone just stopped giving billionaires upwards of $200/month, and didn't have their companies give it to them on their behalf.
How would you fix the humans/society instead? Technology has enabled a lot of evil: the society that had guns came and colonized the society without, and made them slaves (here's the opening to argue that Genghis Khan managed to enslave many societies without guns). The rise of the Internet and online shopping ruined "main street" shops. "Uber for ___" enabled the exploitative gig economy, with retirement meaning dropping dead...
Yeah, we're back to feudal lords having the power to control society; they can even easily buy governments... Seems like the problem is with neoliberal capitalism: without any controls coming from society (i.e. democratically elected governments), it will maximize exploitation.
It's so strange to see people accusing tech companies of using AI to concentrate power and wealth when, thus far, AI has been almost entirely consumer surplus. You have crazily high competition in the industry that allows you, the consumer, to use SOTA models for free, or even run them yourself.
My prediction is that this will keep going all the way to the AGI stage. Someone will release (or leak) an AGI-capable model that's able to design AI chips, as well as the fabs needed to build them, as well as robots to build and operate the fabs, robot factories, raw-material mines, and refineries.
> when thus far, AI has been almost entirely consumer surplus.
Tell that to the 2025 job numbers. Who do you think benefits from a million+ layoffs? The consumers? The new grads who can't even get their careers started?
OpenAI and Microsoft have defined AGI as a revenue number, so yeah, maybe using that definition.
I believe AGI will require the ability to self-tune its own neural network coefficients, which the current tech cannot do because it can't deduce its own errors. Oh, sorry, "hallucinations". Developing brains learn from both pain and verbal feedback (no, not food!), etc.
It's an interesting problem: just telling an LLM it's wrong is not enough to adjust billions of parameters.
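That framing is roughly right about why plain feedback doesn't train anything: a weight update needs a numeric, differentiable error signal, not a sentence. A minimal sketch in PyTorch (the linear layer and random vector are stand-ins for a model's output head and hidden state, not any real LLM):

```python
import torch
import torch.nn.functional as F

# Stand-ins, not a real LLM: a single linear layer playing the role of a
# model's output head, and a random vector playing the hidden state.
vocab_size, hidden = 100, 32
head = torch.nn.Linear(hidden, vocab_size)
optimizer = torch.optim.SGD(head.parameters(), lr=0.1)

state = torch.randn(1, hidden)
correct_token = torch.tensor([42])  # supervision: the known right answer

logits = head(state)
loss = F.cross_entropy(logits, correct_token)  # numeric, differentiable error
loss.backward()                                # a gradient for every weight
optimizer.step()                               # the actual parameter update
# A sentence like "that's wrong" carries no such gradient; without a
# numeric target there is nothing for backward() to differentiate.
```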
> get another job
In this economy? Restarting in a world that doesn't want to train people is expensive at best and suicide at worst.
> these tools can do a considerable portion of my day to day work
Did other technologies get phrased this way? The accounting software is doing my work? The locomotive is doing my work?
Is this really something you want to have proudly said? Because it makes it sound like your "work" is not very important.
Funnily enough, there was a story today in the WSJ about "a parallel world some portion of people have constructed for themselves":
Why the Tech World Thinks the American Dream Is Dying
https://www.wsj.com/tech/ai/why-the-tech-world-thinks-the-am...
https://archive.is/ctuVG
Why should we make an exception in this case?
The author should have a team of programmers trying to implement some of these alternatives
I'm confident "new method of collecting data, performing surveillance and providing ad services" would not be one of them
Programmers are generally not good sources of novel ideas
They tend to focus on copying ("implementing") the ideas of others
Today's "AI", designed by programmers, IMO (other opinions may differ), is an automated form of copying
A lot of people who talk about massive gains seem to forget about code review.
Then in a company, whoever has to review it is f*cked, because that code is much more complex and takes much longer to review.
> if people blame tech, root issues will not be addressed
Agreed. I think people would be open to suggestions if you have actionable ways to improve the current socio-economic system.
Read the article.
If by "people" you mean "Cory Doctorow, the author of the article", then you really don't know anything about their work.
For example, he coined the term "enshitifacation" and talks often about the "enshitogenic policy environment" that gives rise to it.
> Google and Meta control the ad market. Google and Apple control the mobile market.
"Tech companies are monopolies," it says, then proceeds to describe how tech companies compete with each other.