It's worth mentioning that this essay shows some signs of being either partially AI generated or heavily edited through an LLM. Some of the tells are there ("It's not X, it's Y"), and the blog going from nearly zero activity between 2015 and 2025 to an explosion of posts and text output since then also raises an eyebrow.
As someone who has written a few deeply personal articles with LLM assistance, I see the signs, and I'm almost certain this was generated from a few bullet points. The repetition and cadence strongly resemble LLM output. It's the kind of fluff that I remove from a piece, because it lacks humanity and offers little substance.
The comments as well. I won't give away the tells, but HN is less and less pleasant to read. Now is the time to cherish your pockets of small-scale, high-quality forums that aren't flooded by this stuff yet.
This was my thought after getting through a few paragraphs as well. At first I was thinking: this is interesting, maybe worth sharing with colleagues. But then it became too obvious it was AI written or "assisted". Can't take that seriously.
LLMs write this way because people write this way. Maybe not everyone, but enough for it to train the models to do it. Much of my writing reads like an LLM wrote it, but that doesn't make me an LLM.
Why is this sentiment expressed so often ("It was written/edited by AI")?
It seems to bother people, perhaps because it signals low effort.
Does it really matter, as long as the content is good? If the content isn't good, then it's no different from a standard low-quality post.
It is almost certainly 90% AI-generated text. So many paragraphs to say basically nothing at all.
Like look at this paragraph:
> Junior engineers have traditionally learned by doing the simpler, more task-oriented work. Fixing small bugs. Writing straightforward features. Implementing well-defined tickets. This hands-on work built the foundational understanding that eventually allowed them to take on more complex challenges.
The first sentence was enough to convey everything you needed to know, but it kept on adding words in that AI cadence. The entire post is filled with this style of writing, which, even if it is not AI, is extremely annoying to read.
I feel like it's such a lack of self respect and respect for others when people write using AI on personal blogs.
Reading AI code is very pleasant. It's well annotated and consistent - how I like to read code (although not how I write code LOL). Reading prose and opinions is not meant to be this way. It becomes repetitive, boring, and feels super derivative. Why would you turn the main way we communicate with each other into a soulless, tedious chore?
I think with coding it's because I care about *what* the robot is doing. But with communication, I care about what the person is thinking in their mind, not about the robot's interpretation of it. Even if the person's mind isn't as strong. At least then I can size the person up, which is the other reason understanding each other is important, and why it's ruined when you put a robot in between.
It's funny how seemingly easy it is to tell articles like this have that AI-generated whiff to them. The first bit that raised my suspicion was the "The Identity Crisis Nobody Talks About" headline. This "The X nobody talks about" framing feels like such a GenAI thing.
I'm not opposed to AI generated text in principle. But not knowing how it was written is problematic, because it can change the meaning of the text. Take this paragraph for instance:
"From my experience building and scaling teams in fintech and high-traffic platforms, I can tell you that role expansion without clear boundaries always leads to the same outcome: people try to do everything, nothing gets done with the depth it requires, and burnout follows."
This reads like a first person account of someone's experience. Is it though? If it's nobody's experience then it robs this text of its meaning. If it is somebody's experience and that person used AI to improve their style then that's absolutely fine with me.
I would prefer to have the prompt he used to generate the article. Similarly, for compiled binaries, I would rather have the source code that produced them, instead of just an .exe file.
One problem I have seen IRL is AI deployment mistakes and IMO Vibe Coders need an IT/Dev Father Figure type to avoid these simple mistakes. Here is one example:
A surgeon (no coding experience) used Claude to write a web app to track certain things about procedures he had done. He deployed the app on a web hosting provider (PHP/LAMP stack). He wanted to share it with other doctors, but wasn't sure if it was 'secure' or not. He asked me to read the code and visit the site and provide my opinion.
The code was pretty reasonable. The DB schema was good. And it worked as expected. However, he routinely zipped up the entire project and placed the zip files in the web root and he had no index file. So anyone who navigated to the website saw the backups named Jan-2026.backup, etc. and could download them.
The backups contained the entire DB, all the project secrets, DB connection strings, API credentials, AWS keys, etc.
He had no idea what an 'index' file was and why that was important. Last I heard he was going to ask Claude how to secure it.
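The failure mode in that story is easy to check for mechanically. As a purely illustrative sketch (the filename patterns and index names here are my own assumptions, not details from the actual deployment), a small script can flag backup archives and secret files sitting in a public web root, and warn when there is no index file to mask a directory listing:

```python
from pathlib import Path

# Filename patterns that should never be reachable in a public web root.
# This list is illustrative, not exhaustive.
RISKY_PATTERNS = ("*.zip", "*.backup", "*.sql", "*.tar.gz", ".env", "*.pem")

def find_exposed_files(webroot: str) -> list[str]:
    """Return relative paths under webroot that match a risky pattern."""
    root = Path(webroot)
    hits = []
    for pattern in RISKY_PATTERNS:
        hits.extend(str(p.relative_to(root)) for p in root.rglob(pattern))
    return sorted(hits)

def has_index(webroot: str) -> bool:
    """Without an index file, many servers fall back to a directory listing."""
    return any((Path(webroot) / name).exists()
               for name in ("index.html", "index.php"))
```

An index file alone is not enough, of course: directory listings should also be disabled at the server level (e.g. `Options -Indexes` in Apache), and backups should live outside the web root entirely.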
> Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.
1) I guess I am not included in the set named "most software engineers."
2) If the title is "Software Engineer," I think I should be engineering, not coding.
This has probably been beaten to death, but I think the biggest discriminating question between "pro AI" and "against AI" in the software world is: "Do you do this because you like writing code, or because you like building things for the world?"
Of course I don't think it's a binary decision.
Although I'm more motivated by building things, I do somewhat miss the programmer flow state I used to get more often.
It's a different skillset and way of thinking. Engineers tend to think vertically deep on technical problems. With AI, you have to think horizontally broad and vertically up on the architectural problem. The trick is to be comfortable relegating the details to AI.
One concrete example of this realization was when I was researching how to optimize my Claude Code environment with agents, skills, etc. I read a lot of technical documents on how these supplemental plugins work and how to create them. After an hour of reading through all this, I realized I could just ask Claude to optimize the environment for me given the project context. So I did, and it was able to point out plugins, skills, and agents that I could install or create. I gave it permission to create them and it all worked out.
This was a case where I should not think deeper technically, but at a more "meta" level, defining the project well enough for Claude to figure out how to optimize the environment. Whether that gave real gains is another question, of course. But I have anecdotally observed faster results and less token usage due to context caching and slightly more tool-directed prompts.
The post is right superficially. It made being an engineer harder because it took away the easy parts that anyone can do and it forces engineers to think of the hard ones.
No jobs get easier with automation - they always move a step up in abstraction level.
An accountant who was super proficient at adding numbers could no longer rely on those skills once the calculator was invented.
This is the key. I haven't found that things have become harder. The hard parts are still hard, and those have been the most important and prominent parts of my job once I reached a certain level.
I disagree on making it easier. I'm very capable of writing code in multiple languages but it's boring and monotonous. It's getting in the way of me building the system I have in mind. I prefer the engineering (design) to writing. If I can describe my system design to something (a junior developer or an AI) and see it come to life quickly, that's great; it lets me spend more time on designing the system, or perhaps designing more systems.
That said, there are plenty of amateurs who find coding approachable and system design daunting. For them, eliminating coding and moving the focus to system design would be a nightmare.
> The post is right superficially. It made being an engineer harder because it took away the easy parts that anyone can do and it forces engineers to think of the hard ones.
I dunno about that. Look at blogging as an example - AI took away the "easy"[1] part of blogging, and now we are left with 90% crap AI-generated "articles" like the one you just read.
I feel it's the other way around - AI took away the hard parts, of both blogging and programming, and now what we have to look forward to every single damn day is a deluge of AI slop of absolutely poor quality.
Continuing with the literature analogy (because this article was written by an AI), adding AI as a tool for authors isn't producing the next Terry Pratchett quicker, it's delaying the production of the next Terry Pratchett because the next Terry Pratchett will be drowned out by an unstoppable volume of AI slop.
After all, if you can't recognise obvious AI blog posts, what makes you think you can recognise poor code?
---------------------
[1] I am using the term as you are using it. I don't really believe that it took away the easy part.
It might be worth mentioning studies that show the lack of productivity gains from LLM usage. These posts take it as an unequivocal given. Management might still have the expectations that certain tasks are faster. But they aren’t always connected to reality because they’re not thinking as engineers.
There's nothing new about this pattern. When the tractor was invented, the farmer didn't get to knock off early. He just started producing 10x more. Then the tractors got bigger and more powerful, and the things you used them with got more sophisticated too and suddenly you're producing 100x more.
Prompt: "The text of the blog article is copy-pasted below. Please summarize it in one paragraph."
Result: "The blog article explores the paradox that while AI has made coding easier by automating many tasks, it has also made the role of software engineers significantly harder. AI tools have raised expectations for productivity, leading to increased workloads and burnout without corresponding support or recognition. Engineers, especially juniors, are now expected to take on broader responsibilities—ranging from product thinking to deployment—while the hands-on act of writing code has diminished. This shift in roles and expectations is often unacknowledged by leadership, creating a disconnect between those who use AI tools and those who manage the work. The article emphasizes the growing complexity of the engineer's job, the burnout that results from unrealistic demands, and the identity crisis many engineers face as their core role changes. It calls for better leadership, structured training, and realistic expectations to avoid the erosion of trust and talent in the industry."
Whatever. Blog posts about AI that dance around the topic are not that valuable. For a project, you'll hire somebody, not some AI (as bcachefs' author does). What matters is that you trust a person to do the job, and expecting them to churn code is useless. The real benefits of someone appear on their CV (experience, known technologies, human abilities -- communication, empathy, understanding).
AI generated content is an existential threat to human knowledge.
"the skills that the new engineering landscape actually requires: system design, architectural thinking, product reasoning, security awareness, and the ability to critically evaluate code they did not write."
These, surely, are the skills they always needed? Anyone who didn't have these skills was little more than a human chatgpt already, receiving prompts and simply presenting the results to someone for evaluation.
That’s a great point, but yes: a lot of devs were nothing more than a glorified LLM and during reviews were just an expensive linter. Reality is catching up to those.
This is what happened 10 years ago, when machine translation entered the professional translation business. Post-editing the machine translation was often slower than a human translating the sentences from scratch. Now nearly the whole industry is post-editing machine translations, and more and more content is not even post-edited.
What I never enjoyed was looking up the cumbersome details of a framework, a programming language, or an API. It's really BORING to figure out that tool X calls its paging params page and pageSize while Y calls them offset and limit. Many other examples could be added.
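The page/pageSize vs. offset/limit mismatch is exactly the kind of trivial-but-tedious mapping involved here. As a purely illustrative sketch (the two conventions mirror the comment's example; real APIs vary in 1-based vs. 0-based pages, maximum limits, cursors, and so on):

```python
def paging_params(style: str, page: int, per_page: int) -> dict:
    """Map one pagination intent onto either naming convention.

    `style` selects which hypothetical API we're talking to:
    "page"   -> page/pageSize params (pages are 1-based here)
    "offset" -> offset/limit params
    """
    if style == "page":
        return {"page": page, "pageSize": per_page}
    if style == "offset":
        return {"offset": (page - 1) * per_page, "limit": per_page}
    raise ValueError(f"unknown pagination style: {style}")
```

For example, page 2 at 10 items per page becomes `{"page": 2, "pageSize": 10}` for one API and `{"offset": 10, "limit": 10}` for the other; the shim remembers the convention so you don't have to.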
For me, I feel at home in so many new programming languages and frameworks that I can really ship ideas. AI really helps with all the boring stuff.
Same here. I like bringing ideas to life; code is just a means to an end. I can now give detailed designs to an AI and let it write the hundreds of lines of code in just minutes, and with far fewer typos than I would make. It's still not perfect - I have to review it all - but if I give it a proper spec it generally creates exactly what I had in mind.
Agree, it’s made programming so much fun. The other day I wrote a C# app just because it was the best language for the job, I’ve never touched .Net in my life. Worked great, clients loved it.
I can actually build nice UIs as a traditional ML engineer (no more streamlit crap). People are using them and genuinely impressed by them
I can fly through Rust and C++ code, which used to take ages of debugging.
The main thing that is clear to me is that most of the ecosystem will likely converge toward Rust or C++ soon. Languages like Python or Ruby or even Go are just too slow and messy, why would you use them at all if you can write in Rust just as fast? I expect those languages to die off in the next several years
I have a similar problem - AI is making building products easier, but it's made "shipping" a product 100x harder.
I was always a mediocre engineer, and stopping work on a personal project usually happened bc "feature XYZ is way too hard to build and I won't spend another three weeks on it". Nowadays anything can be built in a couple of days, so scope creep plus "would be cool if it could also do XYZ" makes it harder to walk away from a project and call it done.
But ofc these are personal projects, and I use them daily (like a personal workout system and tracker which I run with Claude Code, and which I love to call Claude Co-Workout). It doesn't "work" as a standalone app. It's mostly a "display system" for whatever CC outputs to me, so I can take the daily workout to the gym.
I got into software bc I liked to put out fun products and projects; I never really liked the process of writing software itself. But either way I'm still running into the "it's harder to put projects out than ever" dilemma, even though the projects are way easier to make, and higher quality than ever.
I'm wondering if it'd be fun to have an "Ask HN: Show us what you've built with (mostly) AI" thread?
To be frank, a lot of companies don't need engineers. They need someone to do the job "quickly", "ASAP", and that's it. They are hiring coders masquerading as programmers who are masquerading as engineers.
I'd say this -- if you really want to be a real engineer, you should avoid many career paths out there. Potentially ANY position DIRECTLY facing business stakeholders is at best not a good choice, and at worst deprives you of your already remote chance to be a good engineer. The lower level you move to, the better, because the environment FORCES you to be a true engineer -- either you don't and fail, or you do and keep the job.
I get instantly turned off by a mere whiff of AI when reading something, and consequently I refuse to foist such garbage on my fellow human beings. But by god if I read another two-line (!!!) comment with an emdash on LinkedIn, I'm going to drop a bollock.
I've unfortunately stopped reading articles before reading comments here as it's all mostly garbage now. I'm not sure what people are trying to accomplish with generating blogs aside from either clout farming or marketing for their companies.
AI allows you to accelerate the initial build process, but I think engineering is all about craftsmanship. Today most LLMs have poor taste and chipping away the cruft matters more than ever.
The scenario I'm somewhat worried about is that instead of 1 PM, 1 designer and 5 developers, there will be 1 PM, 1 designer and 1 developer. Even if tech employment stays stable or even slightly increases due to Jevons paradox, the share of software developers in tech employment will shrink.
> Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.
This resonates somewhat, but for a different reason. My mental model is that there are two kinds of developers, the craftsmen and the artists.
The artist considers the act of writing code their actual fulfillment. They thrive on beautifully written code. They are often attached to their code to a point where they will be hurt if someone criticizes (or even deletes) it.
The craftsman understands that code exists to serve a purpose and that is to make someone's life easier. This can be a totally non-technical customer/user that now can get their work done better. It could be another developer that benefits from using a library we wrote.
The artist hates LLMs as it takes away their work and replaces their works of beauty with generic, templatized code.
The craftsman acknowledges that LLMs are another tool in the toolbelt and using them will make them create more benefits for their customers.
For me, one thing that completely changed almost overnight was dealing with junior developers.
In the past, I would give them an assignment and they would take a few days to return with the implementation. I was able to see them struggling, they would learn, they would communicate and get frustrated by their own solution, then iterate.
Today, there are two kinds: 1) the ones who take a marginally smaller amount of time because they’re busy learning, testing and self reviewing, and 2) the ones who watch Twitch or Youtube videos while Claude does the job and come to me after two hours with “done, what’s next” while someone has to comb through the mess.
Leadership might see #2 and think they’re better, faster. But they are just a fucking boat anchor that drags down the whole team while providing nothing more than a shitty interface to an LLM in return.
The author introduces the term "Supervision Paradox", but IMHO this is simply one instance of the "Automation Paradox" [1], which has been haunting me since I started working in IT.
Interestingly, most jobs don't incentivize working harder or smarter, because it just leads to more work, and then burn-out.
Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.
I think there's a big split between those who derive meaning and enjoyment from the act of writing code or the code itself vs. those who derive it from solving problems (for which the code is often a necessary byproduct). I've worked with many across both of these groups throughout my career.
I am much more in the latter group, and the past 12mo are the most fun I've had writing software in over a decade. For those in the first group, it's easy to see how this can be an existential crisis.
One supposition I see in this and so many other articles is that using AI to generate code results in not knowing how it works. I believe that's only true for "vibe coding", not for engineers using AI to generate code. The difference is in how much you plan, design, and specify upfront.
If you give an AI a very general prompt to make an app that does X, it could build that in any imaginable way. Someone who doesn't know how these things are done wouldn't understand what way was chosen and the trade-offs involved. If they don't even look at the code, they have no idea how it works at all. This is dangerous because they are entirely dependent on the AI to make good decisions and to make any changes in the future.
Someone who practices engineering by researching, considering their options, planning and designing, and creating a specification, leaves nothing up to chance. When the prompt is detailed, the outcome is constrained to the engineer's intent. If they then review the work by seeing that it wrote what they had in mind, they know that it worked and they know that the system design matches their own design. They know how it works because they designed it and they can modify that design. They can and have read the code so they can modify it without the help of the AI.
If you know what code you want generated, reviewing it is easy - just look and see if it's what you expected. If you didn't think ahead about what the code would look like, reviewing is hard because you have to start by figuring out what the codebase even does.
The same goes for working in small iterations rather than prompting an entire application into existence. We all know how difficult it is to review large changes and why we prefer small changes. Those same rules apply to iterations regardless of whether the code was written by a person or an AI.
AI code generation can be helpful if the engineer continues acting as an engineer. It's only when someone who isn't an engineer or when an engineer abdicates their responsibilities to the AI that we end up with an unmaintainable mess. It's no different than amateurs writing scripts and spreadsheets without a full understanding of the implications of their implementation. Good software comes from good engineering, not just generating code; the code is merely the language by which we express our ideas.
> One engineer captured this shift perfectly in a widely shared essay, describing how AI transformed the engineering role from builder to reviewer.
I stopped here. Was this written by an LLM? This sentence in particular reads exactly like the author supplied said essay as context and this sentence is the LLM's summarization of it. Nowhere is the original article linked, either, further decreasing trust. Moreover, there's an ad at the bottom for some BS "talent" platform to hire the author. This article is probably an LLM-generated ad.
My trust is vacated.
This makes me feel that the SWE work/identity crisis is less important than the digital trust crisis.
Ah, just in time summarizing what we went through recently. Our "leaders" officially added these to our already nonsensical list of goals.
A. Measurably demonstrate that at least 50% of code/tests are AI generated.
B. X% Faster delivery timelines due to improved productivity tools.
You can't expect to make a pizza in 50% less time just because you bought a faster dough maker. Especially when you don't even know whether the dough comes out under-kneaded, over-kneaded, or as plain lumps!
I've always been motivated by making simple solid foundations in my code the fastest way possible.
So for me, being able to have AI write certain things extremely fast, with me just doing voice-to-text with my specific approach, is amazing.
I am all in on everything AI and have a Discord server just for openclaw and specialized per-repo assistants. When I'm busy, it really feels like I can just throw it an issue-tracker number.
Then I will ssh via vs code or regular ssh which forwards my ssh key from 1password. My agents have read only repo access and I can push only when I ssh in. Super secure. Sorry for the tangent to the article but I have always loved coding now I love it even more.
In writing code, as in writing poetry, the mechanical labor is 5% writing, 45% editing, and 50% reading. But the only thing that makes it yours is you.
AI made it so individual developers can outsource their work, not just companies. Maybe there are some lessons to be learned from companies that manage outsourced work successfully.
I'm not sure if it's made engineering harder, but it's certainly changing what it means to be a good engineer. It's no longer just about writing code. Now it's increasingly about having good taste, making the right decisions, and sometimes just being blessed with the Midas touch.
In any case, I think we should start treating the majority of code as a commodity that will be thrown away sooner or later.
The role of an engineer is to produce software that provably works. If you are producing more bugs, it's because you're skipping provability. AI is also really good at writing tests and doing test-driven development. You can get 100% branching coverage. You can use a secondary LLM to review the work and make sure everything follows best practices.
LLMs can accelerate you if you use best practices and focus on provability and quality, but if you produce slop, LLMs will help you produce slop faster.
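Branch coverage in the sense described above just means every conditional path gets exercised by at least one test. A minimal illustration (the function, its rates, and the test names are all made up for the example, not from any real codebase):

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    """Toy function with three branches to exercise."""
    if weight_kg <= 0:                      # branch 1: invalid input
        raise ValueError("weight must be positive")
    base = weight_kg * 2.0
    if express:                             # branch 2: express surcharge
        return base + 10.0
    return base                             # branch 3: standard rate

# One test per branch: this is what "100% branch coverage" means here.
def test_rejects_nonpositive_weight():
    try:
        shipping_cost(0, express=False)
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_express_adds_surcharge():
    assert shipping_cost(2.0, express=True) == 14.0

def test_standard_rate():
    assert shipping_cost(2.0, express=False) == 4.0
```

Running a tool like coverage.py with branch measurement enabled (`coverage run --branch`) over tests like these is how the 100% figure can actually be verified rather than merely asserted.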
This section very much resonated with me, even though I still haven't tried any of the AI tools:
... most software engineers became engineers because they love writing code. Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
Actually surprised none of the other comments have picked up on this, as I don't think it's especially about AI. But the periods of my career when I've been actually writing code and solving complicated technical problems have been the most rewarding times in my life, and I'd frequently work on stuff outside work time just because I enjoyed it so much. But the other times when I was just maintaining other people's code, or working on really simple problems with cookie-cutter solutions, I get so demotivated that it's hard to even get started each day. 100%, I do this job for the challenges, not to just spend my days babysitting a fancy code generation tool.
I feel like there's a market out there for a weekly newsletter that summarises all the AI takes like this and collects the one meaningful snippet of insight (if any)
> ...most software engineers became engineers because they love writing code. Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it...
A SWE who bases their entire identity and career around only writing code is not an engineer - they are a code monkey.
The entire point of hiring a Software ENGINEER is to help translate business requirements into technical requirements, and then implement the technical requirements into a tangible feature or product.
The only reason companies buy software is because the alternative means building in-house, and for most industries software is a cost-center not a revenue generator.
I don't pay (US specific) 200K-400K TCs for code monkeys, I pay that TC for Engineers.
And this does a disservice to the large portion of SWEs and former SWEs (like me) who have been in the industry because we are customer-outcome driven (how do we use code to solve a tangible customer need) and not here to write pretty code.
You might be missing that a lot of companies are giddy that the mgmt can just vibe code stuff and there's no opportunity for engineers to be involved, (except for when it crashes?). I use AI tools and they are nice, but the mgmt are mostly not logical and need someone to sort through their bullshit.
While I agree with the thrust of the article: It would help if the article itself wasn't clearly at least partially LLM written. It has many of the shibboleths:
"This is not a minor adjustment. It is a fundamental shift in professional identity. "
"That is not empowerment. That is scope creep without a corresponding increase in compensation"
Honestly, it's lazy. At least edit the bloody thing.
I still feel like I'm writing code. I tell Claude what to write and I am very specific about it. There are still tons of problems for which Claude has no particular solution, and it's on me and other humans to figure out what to do. For those cases where I tell it to just go off and write a whole script that I'm not even looking at, those are throwaway, low-value cases I don't care about, where previously I'd not have even taken on that particular job.
Developers will become admins. Responsible for supervising and owning the outcomes of increasingly agentic engineering outputs. Trust is the most important thing in business and it’s worth more than ever.
> Why? Because the bottleneck was never typing code. It was always understanding the problem, making architectural decisions, debugging edge cases, and most importantly - knowing what NOT to build.
For me, this is a bit different. Writing code has always been the bottleneck. I get most of my joy out of solving edge cases and finding optimizations. My favorite projects are when I’m given an existing codebase with the task, “When Mars and Venus are opposite each other, the code gets this weird bug that we can’t reproduce.”
When a project requires me to start from scratch, it takes me a lot longer than most other people. Once I’ve thought of the architecture, I get bored with writing the implementation.
AI has made this _a lot_ easier for me.
I think the engineers who thrive will be the ones who know when to use which tool. This was the case before AI too; AI is just another tool allowing more people to thrive.
I have my own side project that I vibe coded. I probably did what would take a team 6 months and produced it myself in one month.
I'm not afraid of breaking stuff because it is only a small set of users. However for my own code for my professional job no way I would go that fast because I would impact millions of users.
It is insane that companies think they can replace teams wholesale while maintaining quality.
The issue is that before AI, 1% of the population was capable of creating 1 side project per year. After AI, 10% of the population is capable of creating 10 side projects per year. The competition grew by 100x. The pessimist in me thinks that the window of opportunity to create something successful is shrinking.
> I've shipped 7 side projects in the past year using AI heavily. But I've noticed something counterintuitive: the total time from idea to shipped product barely decreased.
> Why? Because the bottleneck was never typing code.
Were you also shipping side projects every 2 months before AI?
If not, this comment just reads like cognitive dissonance. Your core claim is that AI has enabled you to ship 7 projects in 12 months, which presumably was not something you did pre-AI, right? So the AI is helping ship projects faster?
I agree that AI is not a panacea and a skilled developer is required. I also agree that it can become a trap to produce a lot of bad code if you’re not paying attention (something a lot of companies are going to discover in 2026 IMO)
But I don’t know how you can claim AI isn’t helping you ship faster right after telling us AI is helping you ship faster.
>The engineers who thrive will be the ones who can resist the temptation to over-engineer when the marginal cost of adding complexity drops to near zero.
I think this isn't being discussed enough in the SWE world. It wasn't too long ago that engineers on HN would describe a line of code as "not an asset but a liability". Now that code is "free" though, I'm seeing more excessively verbose PRs at work. I'm trying to call it out and rein it in a bit but until engineers on average believe there is inherent risk here, the behavior will continue.
We need more metrics for this. I hear people making this claim on HN all the time as if they know it absolutely for sure, but I doubt it's this simple.
I can guarantee you this... the story is not absolute. Depending on who you are and what you need to work on, dev time could be slower, the same, or faster for you. BUT what we don't know is the proportion. Is it faster for 60% of people? 70%? 80%?
This is something we don't know for sure yet. But I suspect your instinct is completely wrong and that 90% of people are overall faster... much faster. I do agree that it produces more bugs and more maintenance hurdles, but it is that much faster.
The thing is, LLMs can bug squash too. AND they are often much faster at it than humans. My agentic setup just reads the incoming Slack messages on the issue, makes a ticket, fixes the code, and creates a PR in one shot.
This piece hit something I've been trying to articulate for months.
The part about the identity shift from builder to reviewer - that's the real thing nobody's talking about. I spent years getting good at turning thoughts into code. That's a craft. There's a rhythm to it, a kind of flow state you hit when the problem and the solution start locking together.
Now I spend most of my time evaluating code I didn't write, catching issues I didn't create, in systems I didn't design. The volume is higher. The satisfaction is lower.
The HBR study numbers track with what I'm seeing around me. 83% saying AI increased their workload. That's not a bug, that's the whole point. We made code production faster, so now we produce more code. Nobody stopped to ask if that was actually the bottleneck worth solving.
The thing that gets me is the pretense. Everyone talks about AI making engineers more productive. But if you look at what's actually happening, we're not producing better software. We're just producing more of it, faster, with the same number of people. That's not productivity - that's volume.
What's being lost is the time to think. To sit with a problem long enough that you actually understand it before you start implementing. The old friction of writing code manually gave you that thinking time by default. Now you have to fight for it.
Well, how many side projects did you ship last year? In the last few months I've written small programs over a weekend that would have taken me a month a couple of years ago, and they're better. Not in terms of code quality, but in terms of features I wanted and knew how to implement but couldn't be bothered to; Opus can do them in a minute, and even if it's not the optimal implementation, it's completely functional, fine, and costs me almost nothing.
The new bottleneck is code ownership. You have to understand what it does and how it works to maintain it long term. You can LLM into a maintainability disaster but you can’t LLM out of it. Biting off more than you can chew is more dangerous than ever.
Fortunately, AI can also be used to reduce complexity. The case I notice most often is choosing the slightly uglier API, or duplicating some generic code, to avoid pulling in a dependency. Examples are avoiding UI frameworks and directly accessing the DOM in simple web projects, using the CLI arg parser from the stdlib, or adding simple helper functions rather than pulling in left-pad-like dependencies.
Since managing dependencies is one of the major maintenance burdens in some of my projects (updating them, keeping their APIs in mind, complexity due to overgeneralization), this can help quite a lot.
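A minimal sketch of the dependency-avoidance pattern described above, in Python; the helper and parser names are illustrative, not from the original comment. A three-line left-pad-style helper plus the stdlib arg parser replace what might otherwise be two external packages:

```python
import argparse

def left_pad(s: str, width: int, fill: str = " ") -> str:
    # Three-line helper instead of a left-pad-style dependency.
    return s if len(s) >= width else fill * (width - len(s)) + s

def build_parser() -> argparse.ArgumentParser:
    # Stdlib arg parsing instead of pulling in a CLI framework.
    p = argparse.ArgumentParser(description="pad a value to a fixed width")
    p.add_argument("value")
    p.add_argument("--width", type=int, default=8)
    return p

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(left_pad(args.value, args.width))
```

The trade-off is exactly as the comment says: the inline helper is slightly uglier than a polished library API, but there is nothing to update, audit, or keep in mind later.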
When the goal is to ship (the result), I'll happily leverage LLMs to try an idea or three out. However, it wouldn't be fair to claim that my side projects have exactly one goal. That's why I only use AI-generated code for stuff I already know how to do and have done many times, where the only thing AI saves me is the time spent typing it out.
Anything else? I'll struggle and grow as a developer, thanks. And before anyone says "but there are architecture decisions etc. so you still grow"... those existed anyways. If I have to practice, I'll practice micro AND macro skills.
This tracks with the way a lot of heavily vibecoded projects end up feature-heavy while those features often don't fully work and, most importantly, don't fit together cohesively. In other words, the quality is low.
I totally agree, except that the more we get used to working with the tools, the better and faster things will get. I would argue the field has been evolving fast over the past 3 years but is now showing signs of slowing down. And I think this is good, as it will allow people to catch up and refine their approach to adapt to the new paradigm of coding.
When I got to the part saying that developers chose software engineering as a job because they like to code, not because they want to review or "manage" code, I really felt that. But while I enjoy coding and building as a solo developer on my own projects, I can't really say I've ever enjoyed it as a job. Or are you not supposed to like your job? Is that how the world works?
My immediate reaction was, "Only 7?" but that may not be a fair thing to think, depending on what the constraints were.
The shift I've experienced is something akin to being able to finally focus on the aspects I've always enjoyed most: architecture and user experience. I review all the code, but through iteration my prompts have gotten better, and for the most part my automated codemonkey 'employee' produces good code. It's not reasonable to expect complex things to be one-shot; UX improvements always require follow-ups, and features need to be divided and conquered one at a time. Engineers who lack those higher level skills will struggle. You are leading a small team now, not just plugging away at implementing user stories.
> AI made me faster at producing code, but it also made me produce MORE code, which means more surface area for bugs, more maintenance burden, more complexity to reason about
I think from time to time it's better to ask the AI whether the codebase could be cleaned up and simplified. Even better if you use a different AI than the one you used to build the project.
They've always said you spend a lot more time reading code than writing it. If suddenly you're writing a lot more code, you're going to spend a ton more time reading it.
> Because the bottleneck was never typing code. It was always understanding the problem, making architectural decisions, debugging edge cases, and most importantly - knowing what NOT to build.
The AI can help you in these tasks too, but you need to ask for the help in terms that it can help you with, and not expect it to be genuinely intelligent or to have a crystal ball. As a bonus, once you've gotten these things into the agentic context, the code itself becomes better too.
You are putting sentences together just like an LLM would - quite fitting for an AI generated article. You might want to get it checked out, these days you never know if you are a real person or not.
I get benefits from AI both on the writing-the-code part and the understanding-the-problem part. If AI disappeared tomorrow I'd probably still enter "plan mode" in my head. I like having the discussion with the AI about requirements and edge cases and all that, while it updates the plan and documents architectural decisions in CLAUDE.md. I love that I can add extra polish, such as color to terminal output, or other random features that would not have made the cut before. Instead of toiling on a random one-off script to fix a problem, I can have a whole CLI built that is a joy to use. Explaining complex architecture is easy now because instead of a boring EDD I can slop out animations that demonstrate data moving and transforming through a system.
As you mentioned, scope definition and constraints play a major role but ensuring that you don't just go for the first slop result but refine it pays off. It helps to have a very clear mental model of feature constraints that doesn't fall prey to scope creep.
I mean, if you've built 7 side projects (and we assume the pace is unchanged, since total time from idea to shipped product barely decreased), how are these things still a bottleneck for you? I'm assuming you're building in a domain/language you're comfortable with by now (unless you're crazy and try something fundamentally different on each of those shipped products).
Why will the 8th project still have those things as the bottleneck given your experience?
Also if you're not seeing any real gains in productivity, why are you using AI for your side projects and wasting tokens/money?
> The engineers who thrive will be the ones who can resist the temptation to over-engineer when the marginal cost of adding complexity drops to near zero.
One area --and many may not like this fact-- where it can help greatly is that the cost of adding tests also drops to near zero, and that doesn't work against us (because tests are typically far more localized and aren't the maintenance burden production code is). And some of us were lazy and didn't like writing many tests. Or take generative testing / fuzz testing: writing the proper generators or fuzzers wasn't always trivial. Now it could become much easier.
So we may be able to use the AI slop to help us write more correct code. Same for debugging edge cases: models can totally help (I've had a case as simple as a cryptic error message I didn't recognize: passed it plus the code to an LLM and it could tell me what the error was).
But yup it's a given that, as you put it, when the marginal cost of adding complexity drops to near zero, we're opening a whole new can of worms.
TFA is AI slop but fundamentally it may not be incorrect: the gigantic amount of generated sloppy code needs to be kept in check and that's where engineering is going to kick in.
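The generative-testing point above can be made concrete with a sketch of the kind of generator-driven round-trip test that becomes cheap to produce; `rle_encode`/`rle_decode` are stand-in functions invented for this example, not code from the article or thread:

```python
import random

def rle_encode(s: str) -> list:
    # Run-length encode a string into (char, count) pairs.
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def rle_decode(pairs: list) -> str:
    # Inverse of rle_encode.
    return "".join(ch * n for ch, n in pairs)

def fuzz_roundtrip(trials: int = 500, seed: int = 42) -> bool:
    # Generator-driven property test: decode(encode(s)) == s
    # for many random inputs, seeded for reproducibility.
    rng = random.Random(seed)
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randrange(0, 20)))
        assert rle_decode(rle_encode(s)) == s, s
    return True
```

Libraries like Hypothesis automate the generator part properly; the point here is only that the boilerplate of such tests is exactly the localized, low-maintenance code an LLM can churn out safely.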
There's always a grain of truth in everything, but the recent article by the Redis guy (sorry for the lack of name) resonated more with me. It's correct that the load in other areas is increasing, partly because these tools are not there yet when it comes to, for lack of a better word, "good taste". I work with someone who hasn't written a line of code in a year, and it shows, and I'm about tired of dealing with the slop. But there's also a bunch of things at work that you've either done a million times already, or that aren't really challenging problems, just annoying ones that are hard to solve because of all the cruft, plus a lot of boring manual work, and for all of that it's just an amazing help, to the point that I am more relaxed at work than I was previously. And when it does something that is not quite there, I can either fix it manually or tell it to fix it, and it usually "gets it". Of course, if it ultimately replaces me I will not be relaxed, but that's a different topic.
Another little thing that resonated was a tweet that said "some will use it to learn everything and some so that they don't have to learn anything". Of course it's not really a hard truth. It's questionable how much you can learn without really getting your hands dirty. But I do think people who look at it as a tool that helps them and/or makes them better will profit more than people looking to cut corners.
Not really, I disagree. The article did lightly touch on the real reason people enjoy writing code: "craftsmanship". Yes, coding is NOT engineering, it is writing, and the people who enjoy doing it are actually writers, not engineers; I keep mentioning that. With AI, however, those writers have to do the engineering work: the goals, architecture design, managing blueprints, process design and refinement, among many other things. That job is not easy, which is why engineers are "supposedly" paid well. AI took the writing role, and now you have to do the engineering one.
> Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.
> Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
> Now they are being told to stop.
Yeah, so what I've been realizing from witnessing the Rise of the Agents™ is that there are tons of developers that actually don't like writing code and were in it for the money all along. Nothing wrong with money --- I love the green stuff myself --- but it definitely sucks to have their ambivalence (at best) or disdain (at worst) for the craft imposed on the rest of us.
Feel free to replace `writing code` with most work functions that are enjoyable for some and that are being steamrolled by Big AI atm (writing, graphic design, marketing copy, etc.).
thinkingemote|8 hours ago
That this kind of writing puts a great number of us off is not important to many who seek their fortune in this industry.
I hear the cry: "it's my own words the LLM just assisted me". Yes we have to write prompts.
apt-apt-apt-apt|5 hours ago
It seems to bother people, perhaps since it may have been low-effort. Doesn't it not matter as long as the content is good? Otherwise, it seems to be no different than a standard low-quality post.
agentultra|7 hours ago
I don’t think there will be a point in coming to this site if it’s just going to be slop on the front page all the time.
Maybe mods should consider a tag or flag for AI generated content submissions?
altmanaltman|8 hours ago
Like look at this paragraph:
> Junior engineers have traditionally learned by doing the simpler, more task-oriented work. Fixing small bugs. Writing straightforward features. Implementing well-defined tickets. This hands-on work built the foundational understanding that eventually allowed them to take on more complex challenges.
The first sentence was enough to convey everything you needed to know, but it kept on adding words in that AI cadence. The entire post is filled with this style of writing, which, even if it is not AI, is extremely annoying to read.
SecretDreams|7 hours ago
Reading AI code is very pleasant. It's well annotated and consistent - how I like to read code (although not how I write code LOL). Reading language/opinions is not meant to be this way. It becomes repetitive, boring, and feels super derivative. Why would you turn the main way we communicate with each other into a soulless, tedious, chore?
I think with coding it's because I *care* about what the robot is doing. But with communication, I care about what the person is thinking in their mind, not about the robot's interpretation of it. Even if the person's mind isn't as strong. At least then I can size the person up, which is the other reason understanding each other is important, and it's ruined when you put a robot in between.
dom96|7 hours ago
I hate it. I couldn't read much more after that.
herodoturtle|3 hours ago
I see the post is even flagged now.
Irrespective of who wrote it or how it was written, the essay is packed with wisdom.
I’ve been programming for 30+ years and leading teams for the last 20 - and I found the essay deeply insightful.
I realise I’m a sample size of 1, but just figured I’d comment here to advocate against this post being flagged. Surprised that it is.
fauigerzigerk|2 hours ago
"From my experience building and scaling teams in fintech and high-traffic platforms, I can tell you that role expansion without clear boundaries always leads to the same outcome: people try to do everything, nothing gets done with the depth it requires, and burnout follows."
This reads like a first person account of someone's experience. Is it though? If it's nobody's experience then it robs this text of its meaning. If it is somebody's experience and that person used AI to improve their style then that's absolutely fine with me.
oytis|8 hours ago
Looks like something AI would say. Regardless of how it really was written
butILoveLife|7 hours ago
Admittedly it was so long and basic, I stopped halfway.
seethishat|7 hours ago
A surgeon (no coding experience) used Claude to write a web app to track certain things about procedures he had done. He deployed the app on a web hosting provider (PHP LAMP stack). He wanted to share it with other doctors but wasn't sure whether it was 'secure' or not. He asked me to read the code, visit the site, and give my opinion.
The code was pretty reasonable. The DB schema was good. And it worked as expected. However, he routinely zipped up the entire project and placed the zip files in the web root and he had no index file. So anyone who navigated to the website saw the backups named Jan-2026.backup, etc. and could download them.
The backups contained the entire DB, all the project secrets, DB connection strings, API credentials, AWS keys, etc.
He had no idea what an 'index' file was and why that was important. Last I heard he was going to ask Claude how to secure it.
manofmanysmiles|6 hours ago
1) I guess I am not included in the set named "most software engineers."
2) If the title is "Software Engineer," I think I should be engineering, not coding.
This has probably been beaten to death, but I think the biggest discriminating question between "pro AI" and "against AI" in the software world is: "Do you do this because you like writing code, or because you like building things for the world?"
Of course I don't think it's a binary decision.
Although I'm more motivated by building things, I do somewhat miss the programmer flow state I used to get more often.
daemonk|6 hours ago
One concrete example of this realization was when I was researching how to optimize my claude code environment with agents, skills, etc. I read a lot of technical documents on how these supplemental plugins work and how to create them. After an hour of reading through all this, I realized I could just ask Claude to optimize the environment for me given the project context. So I did, and it was able to point out plugins, skills, agents that I can install or create. I gave it permission to create them and it all worked out.
This was a case where I should not think at a deeper technical level, but at a more "meta" level: define the project well enough for Claude to figure out how to optimize the environment. Whether that gave real gains is another question, of course. But I have anecdotally observed faster results and less token usage due to context caching and slightly more tool-directed prompts.
simianwords|8 hours ago
No jobs get easier with automation - they always move a step up in abstraction level.
An accountant who was super proficient at adding numbers could no longer rely on those skills once the calculator was invented.
jghn|8 hours ago
This is the key. I haven't found that things have become harder. The hard parts are still hard, and those have been the most important and prominent parts of my job once I reached a certain level.
RevEng|5 hours ago
That said, there are plenty of amateurs who find coding approachable and system design daunting. For them, eliminating coding and moving the focus to system design would be a nightmare.
lelanthran|2 hours ago
I dunno about that. Look at blogging as an example - AI took away the "easy"[1] part of blogging, and now we are left with 90% crap AI-generated "articles" like the one you just read.
I feel it's the other way around - AI took away the hard parts of both blogging and programming, and now what we have to look forward to every single damn day is a deluge of AI slop of absolutely poor quality.
Continuing with the literature analogy (because this article was written by an AI), adding AI as a tool for authors isn't producing the next Terry Pratchett quicker, it's delaying the production of the next Terry Pratchett because the next Terry Pratchett will be drowned out by an unstoppable volume of AI slop.
After all, if you can't recognise obvious AI blog posts, what makes you think you can recognise poor code?
---------------------
[1] I am using the term as you are using it. I don't really believe that it took away the easy part.
mono442|5 hours ago
I don't think this is true. I'm pretty sure most of them do it because it pays a good salary.
complex_pi|3 hours ago
Prompt: "The text of the blog article is copy-pasted below. Please summarize it in one paragraph."
Result: "The blog article explores the paradox that while AI has made coding easier by automating many tasks, it has also made the role of software engineers significantly harder. AI tools have raised expectations for productivity, leading to increased workloads and burnout without corresponding support or recognition. Engineers, especially juniors, are now expected to take on broader responsibilities—ranging from product thinking to deployment—while the hands-on act of writing code has diminished. This shift in roles and expectations is often unacknowledged by leadership, creating a disconnect between those who use AI tools and those who manage the work. The article emphasizes the growing complexity of the engineer's job, the burnout that results from unrealistic demands, and the identity crisis many engineers face as their core role changes. It calls for better leadership, structured training, and realistic expectations to avoid the erosion of trust and talent in the industry."
Whatever. Blog posts about AI that dance around the topic are not that valuable. For a project, you'll hire somebody, not some AI (as bcachefs' author does). What matters is that you trust a person to do the job, and expecting them to churn code is useless. The real benefits of someone appear on their CV (experience, known technologies, human abilities -- communication, empathy, understanding).
AI generated content is an existential threat to human knowledge.
EliRivers|6 hours ago
These, surely, are the skills they always needed? Anyone who didn't have these skills was little more than a human chatgpt already, receiving prompts and simply presenting the results to someone for evaluation.
mads_quist|7 hours ago
What I never enjoyed was looking up the cumbersome details of a framework, a programming language, or an API. It's really BORING to figure out that tool X calls its paging params page and pageSize while Y uses offset and limit. Many other examples could be added. Now I feel at home in so many new programming languages and frameworks that I can really ship ideas. AI really helps with all the boring stuff.
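The paging mismatch described here is exactly the kind of glue that is tedious to look up but trivial to generate; a hypothetical Python adapter (function and key names are assumptions for illustration, not from any specific tool):

```python
def to_offset_limit(page: int, page_size: int) -> dict:
    # Convert 1-based page/pageSize convention (tool X)
    # to offset/limit convention (tool Y).
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    return {"offset": (page - 1) * page_size, "limit": page_size}
```

Writing this once per API pair is boring; letting an assistant remember which convention each tool uses is exactly the "boring stuff" the comment is happy to delegate.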
k__|7 hours ago
AI makes using them a breeze.
mountainriver|7 hours ago
I can actually build nice UIs as a traditional ML engineer (no more Streamlit crap). People are using them and are genuinely impressed by them.
I can fly through Rust and C++ code, which used to take ages of debugging.
The main thing that is clear to me is that most of the ecosystem will likely converge toward Rust or C++ soon. Languages like Python, Ruby, or even Go are just too slow and messy; why would you use them at all if you can write Rust just as fast? I expect those languages to die off in the next several years.
yawnxyz|4 hours ago
I was always a mediocre engineer, and abandoning a personal project usually happened because "feature XYZ is way too hard to build and I won't spend another three weeks on it". Nowadays anything can be built in a couple of days; scope creep plus "would be cool if it could also do XYZ" makes it harder to walk away from a project and call it done.
But of course these are personal projects, and I use them daily (like a personal workout system and tracker I run with Claude Code, which I love to call Claude Co-Workout). It doesn't "work" as a standalone app. It's mostly a "display system" for whatever CC outputs to me, so I can take the daily workout to the gym.
I got into software because I liked putting out fun products and projects; I never really liked the process of writing software itself. But either way I'm still running into the "it's harder to put projects out than ever" dilemma, even though the projects are way easier to make, and higher quality than ever.
I'm wondering if it'd be fun to have an "Ask HN: Show us what you've built with (mostly) AI" thread?
markus_zhang|5 hours ago
I'd say this -- if you really want to be a real engineer, you should avoid many career paths out there. Potentially ANY position DIRECTLY facing business stakeholders is at best not a good choice, and at worst deprives you of your already remote chance of becoming a good engineer. The lower level you move, the better, because the environment FORCES you to be a true engineer -- either you don't and fail, or you do and keep the job.
RivieraKid|7 hours ago
The scenario I'm somewhat worried about is that instead of 1 PM, 1 designer and 5 developers, there will be 1 PM, 1 designer and 1 developer. Even if tech employment stays stable or even slightly increases due to Jevons paradox, the share of software developers in tech employment will shrink.
amelius|7 hours ago
Maybe this is not entirely true yet, but it most likely will be in the near future.
__bjoernd|5 hours ago
This resonates somewhat, but for a different reason. My mental model is that there are two kinds of developers, the craftsmen and the artists.
The artist considers the act of writing code their actual fulfillment. They thrive on beautifully written code. They are often attached to their code to a point where they will be hurt if someone criticizes (or even deletes) it.
The craftsman understands that code exists to serve a purpose and that is to make someone's life easier. This can be a totally non-technical customer/user that now can get their work done better. It could be another developer that benefits from using a library we wrote.
The artist hates LLMs as it takes away their work and replaces their works of beauty with generic, templatized code.
The craftsman acknowledges that LLMs are another tool in the toolbelt and using them will make them create more benefits for their customers.
whstl|6 hours ago
In the past, I would give them an assignment and they would take a few days to return with the implementation. I was able to see them struggling, they would learn, they would communicate and get frustrated by their own solution, then iterate.
Today, there are two kinds: 1) the ones who take a marginally smaller amount of time because they're busy learning, testing, and self-reviewing, and 2) the ones who watch Twitch or YouTube videos while Claude does the job and come to me after two hours with "done, what's next" while someone has to comb through the mess.
Leadership might see #2 and think they’re better, faster. But they are just a fucking boat anchor that drags down the whole team while providing nothing more than a shitty interface to an LLM in return.
smokel|8 hours ago
Interestingly, most jobs don't incentivize working harder or smarter, because it just leads to more work, and then burn-out.
[1] https://en.wikipedia.org/wiki/Automation#Paradox_of_automati...
bgentry|6 hours ago
I think there's a big split between those who derive meaning and enjoyment from the act of writing code or the code itself vs. those who derive it from solving problems (for which the code is often a necessary byproduct). I've worked with many across both of these groups throughout my career.
I am much more in the latter group, and the past 12mo are the most fun I've had writing software in over a decade. For those in the first group, it's easy to see how this can be an existential crisis.
RevEng|5 hours ago
If you give an AI a very general prompt to make an app that does X, it could build it in any imaginable way. Someone who doesn't know how these things are done wouldn't understand what way was chosen or the trade-offs involved. If they don't even look at the code, they have no idea how it works at all. This is dangerous because they are entirely dependent on the AI to make good decisions and to make any changes in the future.
Someone who practices engineering by researching, considering their options, planning and designing, and creating a specification, leaves nothing up to chance. When the prompt is detailed, the outcome is constrained to the engineer's intent. If they then review the work by seeing that it wrote what they had in mind, they know that it worked and they know that the system design matches their own design. They know how it works because they designed it and they can modify that design. They can and have read the code so they can modify it without the help of the AI.
If you know what code you want generated, reviewing it is easy - just look and see if it's what you expected. If you didn't think ahead about what the code would look like, reviewing is hard because you have to start by figuring out what the codebase even does.
The same goes for working in small iterations rather than prompting an entire application into existence. We all know how difficult it is to review large changes and why we prefer small ones. The same rules apply to iterations, regardless of whether they were written by a person or an AI.
AI code generation can be helpful if the engineer continues acting as an engineer. It's only when someone who isn't an engineer or when an engineer abdicates their responsibilities to the AI that we end up with an unmaintainable mess. It's no different than amateurs writing scripts and spreadsheets without a full understanding of the implications of their implementation. Good software comes from good engineering, not just generating code; the code is merely the language by which we express our ideas.
MattyRad|6 hours ago
I stopped here. Was this written by an LLM? This sentence in particular reads exactly as if the author supplied said essay as context and this sentence is the LLM's summary of it. Nowhere is the original article linked, either, further decreasing trust. Moreover, there's an ad at the bottom for some BS "talent" platform to hire the author. This article is probably an LLM-generated ad.
My trust is vacated.
This makes me feel that the SWE work/identity crisis is less important than the digital trust crisis.
devsda|5 hours ago
A. Measurably demonstrate that at least 50% of code/tests are AI generated.
B. X% Faster delivery timelines due to improved productivity tools.
You can't expect to make a pizza in 50% less time just because you bought a faster dough maker. Especially when you don't even know whether the dough comes out under-kneaded, over-kneaded, or as plain lumps!
zackify|8 hours ago
So for me, being able to have AI write certain things extremely fast, with me just doing voice-to-text with my specific approach, is amazing.
I am all in on everything AI and have a Discord server just for openclaw and specialized per-repo assistants. When I'm busy, it really feels like I can just throw it an issue-tracker number.
Then I SSH in via VS Code or regular SSH, which forwards my SSH key from 1Password. My agents have read-only repo access, and I can push only when I SSH in. Super secure. Sorry for the tangent from the article, but I have always loved coding and now I love it even more.
randomtoast|8 hours ago
> That is not an upgrade. That is a career identity crisis.
This is not X. It is Y.
> The trap is ...
> This gap matters ...
> This is not empowerment ...
> This is not a minor adjustment...
Your typical AI slop rhetorical phrasing.
Phrases like: "identity crisis", "burnout machine", "supervision paradox", "acceleration trap", "workload creep"
These sound analytical but are lightly defined. They function as named concepts without rigorous definition or empirical grounding.
There might be some good arguments in the article, but AI slop remains AI slop.
user____name|3 hours ago
That can't be right?
_pdp_|8 hours ago
In any case, I think we should start treating the majority of code as a commodity that will be thrown away sooner or later.
I wrote something about this here: https://chatbotkit.com/reflections/most-code-deserves-to-die - it was inspired by another conversation on HN.
jghn|8 hours ago
It never was
turlockmike|6 hours ago
LLMs can accelerate you if you use best practices and focus on provability and quality, but if you produce slop, LLMs will help you produce slop faster.
ralferoo|7 hours ago
... most software engineers became engineers because they love writing code. Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
Actually surprised none of the other comments have picked up on this, as I don't think it's especially about AI. But the periods of my career when I've been actually writing code and solving complicated technical problems have been the most rewarding times in my life, and I'd frequently work on stuff outside work time just because I enjoyed it so much. But the other times when I was just maintaining other people's code, or working on really simple problems with cookie-cutter solutions, I get so demotivated that it's hard to even get started each day. 100%, I do this job for the challenges, not to just spend my days babysitting a fancy code generation tool.
ukuina|5 hours ago
Is this still true?
alephnerd|6 hours ago
A SWE who bases their entire identity and career around only writing code is not an engineer - they are a code monkey.
The entire point of hiring a Software ENGINEER is to help translate business requirements into technical requirements, and then implement the technical requirements into a tangible feature or product.
The only reason companies buy software is because the alternative means building in-house, and for most industries software is a cost-center not a revenue generator.
I don't pay (US specific) 200K-400K TCs for code monkeys, I pay that TC for Engineers.
And this does a disservice to the large portion of SWEs and former SWEs (like me) who have been in the industry because we are customer-outcome driven (how do we use code to solve a tangible customer need) and not here to write pretty code.
GeoAtreides|7 hours ago
it's all so fucking tiresome
cmrdporcupine|4 hours ago
"This is not a minor adjustment. It is a fundamental shift in professional identity. "
"That is not empowerment. That is scope creep without a corresponding increase in compensation"
Honestly, it's lazy. At least edit the bloody thing.
bpodgursky|7 hours ago
THE MARKET WILL FILL THAT VOID
IT DOES NOT MAKE IT TRUE
baxuz|7 hours ago
Also, check out the dude's linkedin: https://www.linkedin.com/in/ivanturkovic/
JimBlackwood|7 hours ago
For me, this is a bit different. Writing code has always been the bottleneck. I get most of my joy out of solving edge cases and finding optimizations. My favorite projects are when I'm given an existing codebase with the task, "When Mars and Venus are opposite each other, the code gets this weird bug that we can't reproduce."
When a project requires me to start from scratch, it takes me a lot longer than most other people. Once I’ve thought of the architecture, I get bored with writing the implementation.
AI has made this _a lot_ easier for me.
I think the engineers who thrive will be the ones who know when to use what tool. This was the case before AI too; AI is just another tool allowing more people to thrive.
fma|7 hours ago
I'm not afraid of breaking stuff because it's only a small set of users. However, for the code at my professional job, there's no way I would go that fast, because I would impact millions of users.
It is insane that companies think they can replace teams wholesale while maintaining quality.
Aurornis|6 hours ago
> Why? Because the bottleneck was never typing code.
Were you also shipping side projects every 2 months before AI?
If not, this comment just reads like cognitive dissonance. Your core claim is that AI has enabled you to ship 7 projects in 12 months, which presumably was not something you did pre-AI, right? So the AI is helping ship projects faster?
I agree that AI is not a panacea and a skilled developer is required. I also agree that it can become a trap to produce a lot of bad code if you’re not paying attention (something a lot of companies are going to discover in 2026 IMO)
But I don’t know how you can claim AI isn’t helping you ship faster right after telling us AI is helping you ship faster.
xhrpost|4 hours ago
I think this isn't being discussed enough in the SWE world. It wasn't too long ago that engineers on HN would describe a line of code as "not an asset but a liability". Now that code is "free" though, I'm seeing more excessively verbose PRs at work. I'm trying to call it out and rein it in a bit but until engineers on average believe there is inherent risk here, the behavior will continue.
threethirtytwo|6 hours ago
I can guarantee you this... the story is not absolute. Depending on who you are and what you need to work on, dev time could be slower, the same, or faster for you. BUT what we don't know is the proportion. Is it faster for 60% of people? 70%? 80%?
This is something we don't know for sure yet. But I suspect your instinct is completely wrong and that 90% of people are overall faster... much faster. I do agree that it produces more bugs and more maintenance hurdles, but it is that much faster.
The thing is, LLMs can bug-squash too. AND they are often much faster at it than humans. My agentic setup just reads the incoming Slack messages on the issue, makes a ticket, fixes the code, and creates a PR in one shot.
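The first step of a pipeline like that (Slack message in, ticket out) can be sketched as below. Everything here is a hypothetical stand-in for illustration; `agent.fix` and `open_pr` are not real APIs:

```python
import re

def message_to_ticket(slack_message: str) -> dict:
    """Turn an incoming Slack message into a minimal ticket record.

    The first line becomes the title; any '#123'-style reference is
    extracted as a related issue number.
    """
    lines = slack_message.strip().splitlines()
    issue = re.search(r"#(\d+)", slack_message)
    return {
        "title": lines[0][:80],
        "body": "\n".join(lines[1:]).strip(),
        "related_issue": int(issue.group(1)) if issue else None,
    }

# The remaining steps -- fix the code, open a PR -- would hand this
# record to an agent and a hypothetical tracker/VCS client:
#   ticket = message_to_ticket(msg)
#   branch = agent.fix(ticket)        # hypothetical agent API
#   open_pr(branch, ticket["title"])  # hypothetical VCS call
```

The "one shot" part is the agent loop; the plumbing around it is mostly parsing and glue like this.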
onoht|4 hours ago
The part about the identity shift from builder to reviewer - that's the real thing nobody's talking about. I spent years getting good at turning thoughts into code. That's a craft. There's a rhythm to it, a kind of flow state you hit when the problem and the solution start locking together.
Now I spend most of my time evaluating code I didn't write, catching issues I didn't create, in systems I didn't design. The volume is higher. The satisfaction is lower.
The HBR study numbers track with what I'm seeing around me. 83% saying AI increased their workload. That's not a bug, that's the whole point. We made code production faster, so now we produce more code. Nobody stopped to ask if that was actually the bottleneck worth solving.
The thing that gets me is the pretense. Everyone talks about AI making engineers more productive. But if you look at what's actually happening, we're not producing better software. We're just producing more of it, faster, with the same number of people. That's not productivity - that's volume.
What's being lost is the time to think. To sit with a problem long enough that you actually understand it before you start implementing. The old friction of writing code manually gave you that thinking time by default. Now you have to fight for it.
karl42|6 hours ago
Since managing dependencies is one of the major maintenance burdens in some of my projects (updating them, keeping their APIs in mind, complexity due to overgeneralization), this can help quite a lot.
See also https://www.karl.berlin/simplicity-by-llm.html for some of my thoughts regarding this.
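One small, concrete instance of this: instead of carrying a tiny dependency, an LLM (or you) can inline it. For example, a hypothetical `leftpad`-style micro-package reduces to stdlib code:

```python
# Instead of depending on a hypothetical tiny `leftpad` package:
#   from leftpad import left_pad
# the same behaviour is a one-liner with the stdlib:
def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad `s` on the left with `fill` until it is `width` chars long."""
    return s.rjust(width, fill)
```

One fewer entry to update, audit, and keep in your head.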
Thanemate|7 hours ago
Anything else? I'll struggle and grow as a developer, thanks. And before anyone says "but there are architecture decisions etc., so you still grow"... those existed anyway. If I have to practice, I'll practice micro AND macro skills.
rnimmer|4 hours ago
The shift I've experienced is something akin to being able to finally focus on the aspects I've always enjoyed most: architecture and user experience. I review all the code, but through iteration my prompts have gotten better, and for the most part my automated codemonkey 'employee' produces good code. It's not reasonable to expect complex things to be one-shot; UX improvements always require follow-ups, and features need to be divided and conquered one at a time. Engineers who lack those higher level skills will struggle. You are leading a small team now, not just plugging away at implementing user stories.
eunos|7 hours ago
I think from time to time it's better to ask the AI whether the codebase could be cleaned up and simplified. Even better if you use a different AI than the one you used to build the project.
zozbot234|5 hours ago
The AI can help you with these tasks too, but you need to ask for the help in terms it can act on, and not expect it to be genuinely intelligent or to have a crystal ball. As a bonus, once you've gotten these things into the agentic context, the code itself becomes better too.
One-shotted vibe coding is an anti-pattern.
sh3rl0ck|6 hours ago
As you mentioned, scope definition and constraints play a major role but ensuring that you don't just go for the first slop result but refine it pays off. It helps to have a very clear mental model of feature constraints that doesn't fall prey to scope creep.
westurner|7 hours ago
Were you able to fairly split test?
altmanaltman|7 hours ago
Why will the 8th project still have those things as the bottleneck given your experience?
Also if you're not seeing any real gains in productivity, why are you using AI for your side projects and wasting tokens/money?
TacticalCoder|7 hours ago
One area --and many may not like that fact-- where it can help greatly is that the cost of adding tests also drops to near zero, and that doesn't work against us (because tests are typically far more localized and aren't the maintenance burden production code is). And some of us were lazy and didn't like to write many tests. Or take generative testing / fuzz testing: writing the proper generators or fuzzers wasn't always trivial. Now it could become much easier.
So we may be able to use the AI slop to help us have more correct code. Same for debugging edge cases: models can totally help (I've had a case as simple as a cryptic error message I didn't recognize: I passed it plus the code to an LLM and it could tell me what the error was).
But yup it's a given that, as you put it, when the marginal cost of adding complexity drops to near zero, we're opening a whole new can of worms.
TFA is AI slop but fundamentally it may not be incorrect: the gigantic amount of generated sloppy code needs to be kept in check and that's where engineering is going to kick in.
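As a sketch of the near-zero-cost generative test idea: the `slugify` function and its invariants below are made up for illustration, but the shape is the point -- random inputs, invariant checks instead of expected outputs:

```python
import random
import string

def slugify(title: str) -> str:
    """Example function under test: lowercase, keep [a-z0-9], dash-separate."""
    words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
    return "-".join(words)

def fuzz_slugify(iterations: int = 1000, seed: int = 0) -> None:
    """Cheap generative test: random inputs, check invariants, not outputs."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits + " -_!@#"
    for _ in range(iterations):
        title = "".join(rng.choice(alphabet)
                        for _ in range(rng.randrange(0, 30)))
        slug = slugify(title)
        assert slug == slug.lower()                      # never uppercase
        assert not slug.startswith("-")                  # no leading dash
        assert not slug.endswith("-")                    # no trailing dash
        assert "--" not in slug                          # no empty segments

fuzz_slugify()
```

Writing the generator and picking good invariants used to be the tedious part; that is exactly the boilerplate a model can draft for you to review.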
locallost|7 hours ago
Another little thing that resonated was a tweet that said "some will use it to learn everything and some so that they don't have to learn anything." Of course it's not really a hard truth. It's questionable how much you can learn without really getting your hands dirty. But I do think people looking at it as a tool that helps them and/or makes them better will profit more than people looking to cut corners.
nunez|4 hours ago
> Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
> Now they are being told to stop.
Yeah, so what I've been realizing from witnessing the Rise of the Agents™ is that there are tons of developers that actually don't like writing code and were in it for the money all along. Nothing wrong with money --- I love the green stuff myself --- but it definitely sucks to have their ambivalence (at best) or disdain (at worst) for the craft imposed on the rest of us.
Feel free to replace `writing code` with most work functions that are enjoyable for some and are being steamrolled by Big AI atm (writing, graphic design, marketing copy, etc.).
seanmcdirmid|4 hours ago
And yes, there are also traditionalists who think the old ways are the best ways.