Compared to my salary, the current cost of the models and tokens to do the work I normally would is around 10%-25% of it.
Obviously, you still need someone to prevent the models from going insane and messing everything up, but in my experience (webdev projects, DevOps stuff, local software, well-known domains), it is very much a force multiplier, as long as you acknowledge that you really need tests and various pre-build scripts.
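A minimal sketch of the kind of guardrail this implies: a gate script that refuses AI-assisted changes unless lint and tests pass. The command names (`ruff`, `pytest`) are illustrative assumptions, not from the comment; substitute your project's tools.

```python
import subprocess
import sys

# Illustrative commands; swap in your own lint/format/test tooling.
CHECKS = [
    ["ruff", "check", "."],
    ["pytest", "-q"],
]

def run_checks(checks, run=subprocess.run):
    """Return True only if every check command exits 0; stop at the first failure."""
    for cmd in checks:
        if run(cmd).returncode != 0:
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            return False
    return True

# e.g. as a pre-merge gate: sys.exit(0 if run_checks(CHECKS) else 1)
```

The point is that the model's output never lands without passing the same deterministic checks a human's would.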
So I predict one of two things happening:
A) de-valuation of software development work in well-explored domains (and perhaps some changes in regards to outsourcing, as long as cultural and communication differences can be compensated for); with the implications for those learning programming now
B) the squeeze coming from the other direction, making inference 3-5x more expensive, though maybe not, given how every big org out there is trying to be a loss leader
Either way, it's an interesting direction - instead of ever becoming "proper" engineering (outside of RFCs and foundational stuff), we went from React/Vue/Angular/Svelte/Express.js/Laravel/Django/Rails/ASP.NET/Spring wild west and frameworks of the day (never being able to nail down what "good practices" are and stick to them for decades, but chasing the new thing forevermore), to even closer to producing non-deterministic slop, except that the slop kinda sorta works. Wild times.
But it is true, the cost is effectively zero. There will be, for a long time, free models available and any one of them will give you code back, always!
They never refuse. Worst case scenario the good models ask for clarification.
The cost for producing code is zero and code producers are in a really bad spot.
I am thinking about this a lot right now. Pretty existential stuff.
I think builders are gonna be fine. The type of programmer whom people would put up with just because they could really go into their cave for a few days and come out with a bug fix that nobody else on the team could figure out is going to have a hard time.
Interestingly AI coding is really good at that sort of thing and less good at fully grasping user requirements or big picture systems. Basically things that we had to sit in meetings a lot for.
This has been my experience too. That insane race condition inside the language runtime that is completely inscrutable? Claude one-shots it. Ask it to work on that same logic to add features and it will happily introduce race conditions that are obvious to an engineer but a local test will never uncover.
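A toy illustration (mine, not from the thread) of the kind of race a sequential test never sees: the read and the write of a counter are separate steps, so two interleaved increments lose an update. Events force the bad interleaving so the failure is reproducible rather than flaky.

```python
import threading

class Counter:
    """Check-then-act counter: read and write are separate steps,
    so concurrent increments can clobber each other."""
    def __init__(self):
        self.value = 0

    def increment(self, before_write=None):
        current = self.value          # read
        if before_write:
            before_write()            # test hook: widen the race window
        self.value = current + 1      # write (may overwrite a concurrent update)

def demo_lost_update():
    """Deterministically force the interleaving that loses an increment."""
    c = Counter()
    a_read = threading.Event()
    a_may_write = threading.Event()

    def pause():
        a_read.set()                  # signal: thread A has read the old value
        a_may_write.wait()            # block until the main thread increments

    a = threading.Thread(target=c.increment, kwargs={"before_write": pause})
    a.start()
    a_read.wait()                     # A has read 0 and is paused
    c.increment()                     # main thread: 0 -> 1
    a_may_write.set()                 # A now writes its stale 0 + 1 = 1
    a.join()
    return c.value                    # 1, not 2: one increment was lost
```

A "local test" that calls `increment()` sequentially always passes; only the forced interleaving exposes the bug, which is the gap the comment describes.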
I’m not convinced. That sort of thing usually depends on some very specific arcana or weird interaction between systems that is not in the code. It usually requires either external knowledge or deep investigation and compilation of evidence from multiple sources. I haven’t seen AI do that much.
Look at recent examples of browsers and matrix servers. AI can’t even follow extremely detailed specs with extensive test suites.
If anything, nice and friendly but mediocre devs are in more immediate danger than rough but extremely competent devs.
But we’ve seen C-suites losing institutional knowledge at the drop of a hat for decades, so who knows? Maybe knowledge and skill are not that valued.
> The type of programmer whom people would put up with just because they could really go into their cave for a few days and come out with a bug fix that nobody else on the team could figure out is going to have a hard time.
Meetings hardly get anywhere; most of the details are eventually figured out by developers when interacting with the code. If every idea from PMs were implemented, the software would turn into bloatware before even reaching the MVP stage.
Not really, in my experience you still have to be good at solving problems to use it effectively. Claude (and other AI) can help folks find a "fix", but a lot of times it's a band-aid if the user doesn't understand how to debug / solve things themselves.
So the type of programmers you're talking about, who could solve complex problems, are actually just enhanced by it.
> The type of programmer whom people would put up with just because they could really go into their cave for a few days and come out with a bug fix that nobody else on the team could figure out is going to have a hard time.
This is the exact type of programmer that isn't going to have any issues - ones who actually know what they're doing and aren't just going to vibe-code React slop.
With all due respect to the author, this is a lot of words for not much substance. Rehashing the same thoughts everyone already thinks but not being bold enough to make a concrete prediction.
This is the time for bold predictions, you’ve just told us we’re in a crucible moment yet you end the article passively….
I have a theory: I think the recent advances in coding agents have shocked everyone. It's something of such unheard-of novelty that everyone thinks they've discovered something profound. Naturally, they all think they are the only ones in on it and feel the need to share. But in reality, it's already public knowledge, so they add no value. I've fallen into this trap many times in the last couple of years.
- Small companies using AI are going to kick the sh*t out of large companies that are slow to adapt.
- LLMs will penetrate more areas of our lives, getting closer to the ST:TNG computer. They will be agents in the real-life sense and possibly in the physical world as well (robots).
- ASICs will eat Nvidia's lunch.
- We will see an explosion of software and we will also see more jobs for people who are able to maintain all this software (using AI tools). There is going to be a lot more custom software for very specific purposes.
Here is my bold prediction: 2026 is the year when companies start the layoffs.
2026 is the year when we all realise that we can be our own company and build the stuff in our dreams rather than the mundane crap we do at work.
Honestly, I am optimistic about computing in general. LLMs will open things up for novices and experts alike. We can move into the fields where we can use our brain power... But all we need is enough memory and compute to control our destiny...
I own (cofounded) a medium-sized SaaS business with hundreds of employees. I maintain final say on everything technical and still code every day because it’s important. All of the engineers use LLM tools; you’d be stupid not to. But I need good engineers, I replace good engineers when someone leaves, and the business itself is so much larger than just programming. The system is so huge and complex, and I am the benevolent dictator who architected it and maintains the core design decisions, that the LLM does not replace the need for engineers, nor my own expertise.
Furthermore, if we were truly in the utopia the author describes, why do all the LLM companies employ (and pay top dollar for) so many engineers? Why does OpenAI pay for Slack when they could just vibe-code a chat app in an hour?
The challenge of building a real, valuable software business (or any business) is so much harder than prompting an LLM to “build me a successful software business”.
Has there been any good and useful software created with LLMs or any increase in software quality that we can actually look at?
So far it's just AI doom posting, hype bloggers that haven't shipped anything, anecdotes without evidence, increase in CVEs, increase in outages, and degraded software quality.
Software quality has been degrading for decades without LLMs though.
I only have anecdotal evidence from some engineers I know that they don't write software by hand any more. Provided the software they are working on was useful before, we can say that LLMs are writing useful software now.
If all you were doing was taking requirements from someone else and poorly coding them up (and yes, I know a decent % of the industry comes close to this), then yes, you are obsolete. Something just as useless but much faster is now here.
If you are part of the requirements process. If you find problems to solve and solve them. If you push back on requirements when they are not reasonable. Etc. Then you still have a career and I don't see anything coming for you soon.
> If all you were doing was taking requirements from someone else and poorly coding them up
So, in your entire career, you've always worked in companies where you were a subject matter expert on everything the company did? Always knew the business domain inside out? You were running the numbers, sitting with customers, and determining yourself what they really wanted?
> If you push back on requirements when they are not reasonable. Etc
I did, because the requirements had a cost, which I had to balance with limited resources.
If widget A would make 10 customers happy, but would cost two weeks of work, that could be better spent making widget B that'd make 20 customers happy, then it would not be reasonable.
If widget A and B are free, then it becomes unreasonable to say no.
Yes. Every platform offers free tokens generously.
That is a true statement. It might not be much, but it is enough for you to produce some code, shit out a readme, and then show on Hacker News that you're capable of pushing to git with the help of LLMs.
The zero cost argument is so hollow it’s got an echo.
The systems that allow this are notoriously underpriced, and yet nobody wants to confront the simple business logic that today’s free beer can’t and won’t last.
Elevating this phase as the normal state of LLM usage costs is plainly dumb and a terrible business take.
> The cost of turning written business logic into code has dropped to zero
It hasn't. Large enterprises are currently footing the bill, essentially subsidizing AI for now.
I constantly see comparisons between the $200/month Claude Code Max subscription and the 6-figure ($100k) salary of an engineer.
The comparison here is, first of all, not apples-to-apples. Let's convert the CC subscription to a yearly amount first: 12 × $200 = $2,400. Still more than a 10x difference compared to the human engineer.
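For concreteness, the arithmetic behind that comparison (the numbers are the commenter's illustrative figures, not market data):

```python
# Back-of-envelope subscription-vs-salary comparison.
subscription_per_month = 200       # $/month, Claude Code Max
salary_per_year = 100_000          # $/year, the "6-figure" engineer

subscription_per_year = 12 * subscription_per_month   # $2,400
ratio = salary_per_year / subscription_per_year       # roughly 41.7x

print(f"${subscription_per_year:,}/yr vs ${salary_per_year:,}/yr -> {ratio:.1f}x gap")
```

So "more than 10x" actually understates it: at these figures the gap is closer to 40x, before accounting for the liability and IP issues discussed below.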
Although, when you have the human engineer, you also pay for experience and responsibility, and you somewhat transfer liability (especially when regulations come into play).
Moreover, what a human engineer creates (unless it was stolen IP or plagiarized) is owned by you as the employer/company. Meanwhile, whatever the AI generated is guaranteed to be somewhat plagiarized in the first place, so the ownership of the IP is questionable.
This is like when a layman complains when the electrician comes to their house, identifies the breaker problem, and replaces the breaker, which costs $5, then charges $100 for a 10-minute job. That is a complete underestimation of skill, experience, and safety. The wrong breaker may cause constant trips, causing malfunctions in a multitude of electronic devices in the household. Or worse, it may cause a fire. When you think you paid $100 for 10 minutes, in fact you paid for years of education, exams, certification, and experience, for your future safety.
The same principle applies to the AI. It seems like it has accumulated more and more experience, yet it fails at the first prompt injection. It seems to be getting better at benchmarks because they are now part of its dataset. These are hidden costs that 99% of people do not talk about. All these hidden costs are liabilities.
You may save an engineer's yearly salary today, at the cost of losing ten times more to civil lawsuits tomorrow. (Of course, depending on the field/business.)
If your business was not critical enough to attract a civil lawsuit in the first place, then you probably didn't need to hire an engineer yourself. You could hire an agency/contractor to do it much more cheaply, while still sharing liability...
If written business logic can become code at near-zero cost, people with good product sense can build without buying a big engineering org. That pressure shows up first in commodity SaaS getting repriced (already happening). Meanwhile hyperscalers are leaning harder into being the platform layer by making huge investments, because if apps are easier to create, the durable value shifts to infrastructure, operations, and distribution. Great times.
A - the substitution via AI is good enough to equal everything a business needs to work properly and be sustainable long term. I would argue it does not. A business is NOT just its core tech element.
B - the cost will always remain subsidized by investors’ stupid money. It will not
For sure, because copy-pasting from Stack Overflow was already too difficult. Everyone loves writing business logic in computer-understandable words. Call it an LLM, a programming language, a calculator: there is still a parser and an interpreter. The only switch is the attempt to replace deterministic machines with non-deterministic machines by reducing the machine error rate to an acceptable percentage.
blogs like this seem like they’re in the right direction with LLMs being “here to stay” and a near indispensable part of people’s daily toolkit, but the near certainty that programming as a job or skillset is dead in the water seems just wrong?
like ok the cost for anyone to generate almost always working code has dropped to zero, but how does a lay person verify the code satisfies business logic? asking the same system to generate tests for that just seems to move the goalposts
or like what happens when the next few years of junior engineers (or whatever replaces programming as a field) who’ve been spoon-fed coding through LLMs need to actually decipher LLM output and pinpoint something the machine can’t get right after hours of prompting? a whole generation blindly following a tool they can’t reliably control?
but maybe I am just coping because it feels like the ladder on the rest of my already short career, but some humility m
There is something poignant about the change taking place in the software industry occurring alongside mass deportations. I can’t put my finger on just what it is that makes it so...
The Pollyannas have a point but overstate it. The naysayers should be more cautious, though.
If you're a professional code producer, you shit out code as fast as possible. Don't give anyone time to analyze the disgusting pile of shit you generated, just shit out code as fast as you can and call it a win! Who would prove you wrong?
Would someone waste their precious biological resources reviewing machine-generated slop, when your cadence is superhuman?
Would someone use the same machine you used to evaluate itself? Lol
Saying it again: I think we're in need of a moratorium on "AI Has Changed The World Forever" posts. All of them read the same and offer nothing past "I asked an LLM to make a midsize feature, I haven't looked at the code but it compiles on my machine, and that should terrify you". Buddy, we've had people pushing code that compiles on their machine and occasionally gets a quick read (or no read) in PRs; that terrifies me now.
When these vibe-coded projects realise that maintenance, security updates, and API changes are still needed, get ready for a massive swing back to senior software developers being in demand.
Playing software maintainer while many vibe-coded web apps aren't built with proper software architecture or practices only makes the swing back to senior engineers being in demand a possibility.
Good luck to those who are building 600K-LOC vibe-coded web apps with 40+ APIs stitched together.
I personally love this development. Sure, I find some pleasure in writing code. But what I love most is mapping out a gnarly problem on pen and paper. Then the code is "just" an implementation detail. Guess I'm an ideas guy as per the author?
I predict it's going to be a bloodbath. People who worked for Big Tech have no idea what's coming. Some of us software engineers who have been on the outside have been experiencing issues for almost a decade. The industry is extremely anti-competitive.
Whatever you produce, nobody is going to use unless you produce it under the banner of Big Tech. There are no real opportunities for real founders.
The problem is spreading beyond software. The other day, I found out there is a billion dollar company whose main product is a sponge... Yes, a sponge, for cleaning. We're fast moving towards a communist-like centrally planned economy, but with a facade of meritocracy where there is only one product of each kind and no room for competition.
This feeling of doom that software engineers started to feel after LLMs is how I was feeling 5 years earlier. People are confused because they think the problem is that AI is automating them but reality is that our problems arise from a deeper systemic issue at the core of our economic system. AI is just a convenient cover story, it's not the reason why we are doomed. Once people accept that we can start to work towards a political solution like UBI or better...
We've reached the conclusion of Goodhart's Law, "When a measure becomes a target, it ceases to be a good measure". Our economic system has been so heavily monitored and controlled in every aspect that it has begun to fail in every aspect. Somebody has managed to profit from every blind spot, every exploit exposed by the measurement and control apparatus. Everything is getting qualitatively worse in every way that is not explicitly measured, and the measurement apparatus is becoming increasingly unreliable... Most problems we're experiencing are what people experienced during the fall of communism, except that filter bubbles are making us completely blind to the experience of other people.
I think if we don't address the root causes, things will get worse for everyone. People's problems will get bigger, become highly personalized, pervasive, inexplicable, unrelatable. Everyone will waste their energy trying to resolve their own experience of the symptoms but the root causes would remain.
To me it seems like the big question for the future will be how to achieve political relevance as "the little guy". It seems like with LLMs the typical "get educated" pathway for the lower class is closing quick. I dread to think of a world where large portions of society are essentially "useless".
What went viral?
To me it just seems like people are pretty divided on the topic which makes sense as it’s an emerging technology. I feel I see as many posts against AI as glazing it.
stephenlf|23 days ago
Didn’t realize this was science fiction.
AstroBen|23 days ago
I've seen non-technical people vibe code with agents. They're capable of producing janky, very basic CRUD apps. Code cost definitely ain't zero
bopbopbop7|23 days ago
And how much is technical debt worth?
wiseowise|23 days ago
Amen. It was a good time while it lasted.
Xiol|23 days ago
Tokens are free now?
zb3|23 days ago
Then go and throw your $0 at fixing some real bugs on GitHub... really, if AI works so well, why are all those issues still open?
Look, almost 2K issues open here: https://github.com/emscripten-core/emscripten/issues
If AI really works like non-technical people think it does, why doesn't Google just throw their AI tool to fix them all?
falloutx|23 days ago
Zero, if you don't consider Anthropic's API pricing, the prompter's hourly rate, and the verification bottleneck.
heliumtera|23 days ago
Verification? LoL, lmao even. Your vibes are low.
PostOnce|23 days ago
However, let's suppose the alternate case:
If AI works as claimed, people in their tens of millions will be out of work.
New jobs won't be created quickly enough to keep them occupied (and fed).
Billionaires own the media and the social media and will use them to attempt to prevent change (i.e. apocalyptic taxation)
What, then, will those people do? Well, they say "the devil makes work for idle hands", and I'm curious what that's going to look like.
pvtmert|23 days ago
I even expect "vibe-code-scalers" to come along soon, able to fix and scale up the spaghetti the AI agents plopped out in the first place.
The author seems to be an Amazonian; it also seems that they are good at the "Invent" part, but not the "Simplify" part.
Big Tech has invented LLMs, which is great. But Big Tech hasn't been great when it comes to "Simplify"-ing things. Actually, it's notoriously bad at it.
That is the opportunity here: "Simplify"-ing these workflows, making AaaS (Agent as a Service, or AI as a Service).