Many HN commenters went through the same thing over the last three years. You'd find plenty of skeptics in comments from 2023 and 2024. The first half of 2025 was the anger stage. The latter half of 2025 was full-on bargaining, when models like GPT5.2 and Opus 4.5 were released. In 2026, people are in the depression stage.
I don't think most devs will reach the acceptance stage until later this year, when Blackwell-class models come online and AI undeniably writes better code than the vast majority of humans. I'm pretty sure GPT5.2 and Opus 4.5 were only trained on H200-class chips.
Edit: Based on comments here, it seems like HN is still mostly at the anger stage.
> I don't think most devs will reach the acceptance stage until later this year, when Blackwell-class models come online and AI undeniably writes better code than the vast majority of humans. I'm pretty sure GPT5.2 and Opus 4.5 were only trained on H200-class chips.
We can only hope! It's about time all those pompous developers embraced the economic rug-pull and adopted a lifestyle more in line with their true economic value. It's capitalism, people, the best system there is. Deal with it and quit whining.
Edit: The comment I am replying to was rewritten completely, and originally asserted that the quality of LLMs was now undeniable.
"Undeniably"? I will deny that they are good. I try to use LLMs on a near-daily basis and find them unbearably frustrating. They cannot even reliably complete instructions like "following the pattern of A, B, and C in the existing code, create X, Y, and Z functions but with one change". This is a given; the work I do is outside the training dataset in any meaningful sense, so their next-token prediction is statistically going to lean away from predicting whatever I'm doing, even if RL training to "follow instructions" is marginally effective.
The conclusion I've come to is that the 10x hypebots fall into two categories. The first is hobbyists who could barely code at all, and are now 10x as productive at producing very bad software that is not worth sharing with the world. The other category is people who use LLMs to launder code from the training dataset, washing it free of its licenses. If your use case is reproducing code the model has already been trained on, it can do that quickly.
These claims of "holding it wrong", one of which I already see in the replies, are fundamentally preposterous. This is the revolution that is democratizing software engineering for anyone who can write natural language, yet competent software engineers are using it wrong? No, the reality is that it simply doesn't have that level of utility. If it did, we would be seeing an influx of excellent software worthy of widespread usage that would replace much of the existing flawed software in the world, if not push new boundaries altogether. Instead we get flooded with ShowHNs fit for the pig trough.
That's not to say LLMs have zero utility. They can obviously generate a proof-of-concept quickly, and if the task is trivial enough, save a couple of minutes writing a throwaway script that you actually use day-to-day. I find them to be somewhat useful for retrieving information from documentation, although some of this gain is offset by the time wasted from hallucinated APIs. But I would estimate the productivity gains at 5%, maybe. That gain is hardly worth the accelerating AI psychosis gripping society and flooding the internet with garbage that drowns out the worthwhile content.
Addendum: Now that your post has been rewritten to assert that no, LLMs aren't there yet, but surely in the next 6 months, this time for sure it'll be AGI... welcome to the bubble. I've been told that AGI is coming in a couple of months every month for the past two years. We are no closer to it than we were two years ago. The improvements have been modest and there are clearly diminishing returns on investing in exponential scaling, not to mention that more scaling can never solve the fundamental architectural flaws of LLMs.
> Writing code isn't where I bring the most value. Understanding business problems, analyzing trade-offs, and making sure we're building the right things is where I can put all those years to good use. It might sound like an obvious thing, but it took me a while to get to this point.
Reaching this epiphany is a major milestone in the career of an SE even before the days of LLMs. That's basically the crux of it.
The vast majority of developers are not in roles where decisions at that level are being made (except occasionally, on a smaller scope), so their ability in that context is irrelevant. You're describing project leads and department leads.
Based on my 20 years of experience, the vast majority of developers do not possess those skills.
I'd guess that only 10% of them actually do. To have those skills, you need good user sense, good business sense, good negotiation skills, and good communication skills. Frankly, these skills align more with the product manager role.
Of course, the best people are still going to be those who have the technical chops and business sense. They'll be amplified more in this era.
I just tweeted the exact same thought a few days ago; I guess we're all going through the same journey right now.
When GPT3 was opened to researchers 4-5 years ago, a friend of mine had access and we tried some stuff together. I was blown away that it could translate code it hadn't seen between programming languages, even though it was pretty bad at it at the time. I did not expect coding to be the killer app of LLMs, but here we are.
No. Bringing up stages of grief in a debate (rather than in an account of personal experience, as in the post) is an argument-killer, because any negative response from the alleged grieving side is instantly taken down by smugly categorizing their negativity as a stage of grief. It's not reserved for LLM arguments, either; this is a common wrapper for the less dignified "you disagree with me, which proves I'm right" position.
If the only way to advance my career was to talk into a chatbox that makes shit up and encourages people to kill themselves, I would stop using computers and spend my days picking oranges. I guess some people feel differently.
>What I came to realize as I began using these tools more is that I was entirely wrong about feeling like my skills would become useless. They don't replace all the experience and knowledge I've accumulated in over two decades as a developer, and instead they enhance what I could do.
What you determine to be denial depends only on what you think is inevitable. OP said "my value is in performing more advanced functions that aren't just writing code", and to you it's denial because (from the implication) you think the complete elimination of software engineers as a job is inevitable. If OP said "my value is that I am multifunctional and can pivot to a completely different industry of mental labor" some people would call it denial because they think all those jobs are next in line on the chopping block. If OP said "my value is in being able to perform physical labor for cheap" some people would call it denial because robotics is progressing rapidly. And so on.
rootnod3|28 days ago
Not even starting with how it just “fixes” a bug by introducing a wholly new one, and then re-introduces the old one when you point that out.
catigula|28 days ago
FYI, this is the denial stage.