
The future of software development is software developers

421 points | cdrnsf | 3 months ago | codemanship.wordpress.com | reply

566 comments

[+] solaire_oa|3 months ago|reply
Most people in this thread are quibbling about the exact degree of utility LLMs provide, which is a tedious argument.

What's more interesting to me is, per the article, the concern that everyone leaning into LLMs is doing so without realizing (or while downplaying) the exorbitant, externalized cost. Our current LLM usage is being subsidized to the point of being free by outside investment. One day, when the well runs dry, you must be able either to pay the actual cost (barring grand technological breakthroughs) or to switch back to non-LLM workflows. I run local LLMs infrequently, and every single prompt makes my beefy PC sound like a jet engine taking off. It's a great reminder not to become codependent.

[+] snickerer|3 months ago|reply
After working with agent-LLMs for some years now, I can confirm that they are completely useless for real programming.

They never helped me solve complex problems with low-level libraries. They can not find nontrivial bugs. They don't get the logic of interwoven layers of abstractions.

LLMs pretend to do this with big confidence and fail miserably.

For every problem where I need to switch my brain into ON mode and wake up, the LLM doesn't wake up.

It surprised me how well it solved another task: I told it to set up a website with some SQL database and scripts behind it. When you click here, show some filtered list there. Worked like a charm. A very solved problem and very simple logic, done a zillion times before. But this saved me a day of writing boilerplate.
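The filtered-list task described above really is solved-problem territory; stripped of the web layer, the core of it is a single parameterized query. A minimal sketch (the table, columns, and data here are invented for illustration; the parent's actual stack isn't specified):

```python
import sqlite3

# In-memory database standing in for the site's SQL backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, category TEXT)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?)",
    [("hammer", "tools"), ("drill", "tools"), ("apple", "food")],
)

def filtered_list(conn, category):
    """The 'click here, show a filtered list there' query."""
    rows = conn.execute(
        "SELECT name FROM items WHERE category = ? ORDER BY name", (category,)
    )
    return [name for (name,) in rows]

print(filtered_list(conn, "tools"))  # -> ['drill', 'hammer']
```

Logic this simple and this well-trodden is exactly where the boilerplate savings come from.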

I agree that there is no indication that LLMs will ever cross the border from simple-boilerplate-land to understanding-complex-problems-land.

[+] spicyusername|3 months ago|reply

    I can confirm that they are completely useless for real programming
And I can confirm, with similar years of experience, that they are not useless.

Absolutely incredible tools that have saved hours and hours helping me understand large codebases, brainstorm features, and point out gaps in my implementation or understanding.

I think the main disconnect in the discourse is that there are those pretending they can reliably just write all the software, when anyone using them regularly can clearly see they cannot.

But that doesn't mean they aren't extremely valuable tools in an engineer's arsenal.

[+] bdcravens|3 months ago|reply
"real programming"

Perhaps you're doing some amazing low-level work, but it feels like you're way overestimating how much of our industry does that. A massive amount of developers show up to work every day and just stitch together frameworks and libraries.

In many ways, it feels similar to EVs. Just because EVs aren't yet, and may never be, effective at moving massive amounts of cargo in a day with minimal refueling doesn't mean they aren't an effective solution for the bulk of drivers, who have an average commute of 40 miles a day.

[+] JeremyNT|3 months ago|reply
> After working with agent-LLMs for some years now, I can confirm that they are completely useless for real programming

This is a bit of a no-true-Scotsman, no? For you, "real programming" is "stuff LLMs are bad at," but a lot of us out in the real world are able to effectively extract code that meets the requirements of our day jobs by tossing natural language descriptions into LLMs.

I actually find the rise of LLM coding depressing and morally problematic (re copyright / ownership / license laundering), and on a personal level I feel a lot of nostalgia for the old ways, but I simply can't levy an "it's useless" argument against this stuff with any seriousness.

[+] perhapsAnLLM|3 months ago|reply
"they are completely useless for real programming"

You and I must have completely different definitions of "real programming". In this very comment, you described a problem that the model solved. The solution may not have involved low-level programming, or discovering a tricky bug entrenched in years' worth of legacy code, but it was still a legitimate task that you, as a programmer, would've needed to solve otherwise. How is that not "real programming"?

[+] underdeserver|3 months ago|reply
People are saying Codex 5.2 fully solved the crypto challenges in the 39C3 CTF last weekend.

Three months ago I would have agreed with you, but anecdotal evidence says Codex 5.2 and Opus 4.5 are finally there.

[+] solumunus|3 months ago|reply
It’s crazy how different my experience is. I think it matters enormously what programming language you are using and what your project and architecture are like. Agents are making an extraordinary contribution to my productivity. If they jacked my Claude Code subscription up to $500/month I would be upset but would almost certainly keep paying it; that’s how much value it brings.

I’m in enterprise ERP.

[+] fragmede|3 months ago|reply
> After working with agent-LLMs for some years now, I can confirm that they are completely useless for real programming.

"completely useless" and "real programming" are load bearing here. Without a definition to agree on for those terms, it's really hard not to read that as you're trying to troll us by making a controversial unprovable claim that you know will get people that disagree with you riled up. What's especially fun is that you then get to sneer at the abilities of anybody making concrete claims by saying "that's not real programming".

How tiresome.

[+] furyofantares|3 months ago|reply
> After working with agent-LLMs for some years now

Some years? I don't remember any agents being any good at all before just over a year ago with Cursor, and things really didn't take off until Claude Code.

Which isn't to say you weren't working with agent-LLMs before that, but I just don't know how relevant anything but recent experience is.

[+] bwfan123|3 months ago|reply
> I can confirm that they are completely useless for real programming

Can you elaborate on "real programming" ?

I am assuming you mean the hardest problems that actually get solved; the value of the work is measured in those terms. Easy problems have boilerplate solutions and have been solved numerous times in the past. LLMs excel here.

Hard problems require intricate woven layers of logic and abstraction, and LLMs still struggle since they do not have causal models. The value however is in the solution of these kinds of problems since the easy problems are assumed to be solved already.

[+] lr4444lr|3 months ago|reply
> After working with agent-LLMs for some years now, I can confirm that they are completely useless for real programming.

> They never helped me solve complex problems with low-level libraries. They can not find nontrivial bugs. They don't get the logic of interwoven layers of abstractions.

This was how I felt until about 18 months ago.

Can you give a single, precise example where modern day LLMs fail as woefully as you describe?

[+] beeboop0|3 months ago|reply
I had to disable baby Ceph (DeepSeek 3.1) from writing changes in Continue because he's like a toddler. But he did confirm some solutions, wrote a routine, and turned me on to some libraries, etc.

So I see what you're saying. He comes up with wrong answers a lot for problems involving a group of classes spread across related files.

However, it's Continue, so it can read files in VS Code, which is really nice and helps a lot with its comprehension; sometimes it does find the issue, or at least the nature of the issue.

I tend to give it bug n-1 to pre-digest while I work on bug n.

[+] constantcrying|3 months ago|reply
>After working with agent-LLMs for some years now, I can confirm that they are completely useless for real programming.

>They never helped me solve complex problems with low-level libraries. They can not find nontrivial bugs. They don't get the logic of interwoven layers of abstractions.

>LLMs pretend to do this with big confidence and fail miserably.

This is true for most developers as well. The mean software developer, especially if you outsource, has failure modes worse than any LLM, and the round-trip time is not seconds but days.

The promise of LLMs is not that they solve the single most difficult tasks for you instantly, but that they do the easy stuff well enough that they replace offshore teams.

[+] wiz21c|3 months ago|reply
Claude is currently porting my Rust emulator to WASM. It's not easy at all, it struggles, and I need to guide it quite a lot, but it's way easier to let it do the work than to learn yet another tech myself. For the same result, I have 50% of the mental load...
[+] dawnerd|3 months ago|reply
The idea they're good for development is propped up a lot by people able to have a react + tailwind site spun up fast. You know what also used to be able to scaffold projects quickly? The old init scripts and generators!
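A toy version of such a generator (the file names and contents are invented for illustration; real tools like create-react-app or Yeoman did far more) is just a script that stamps out the same skeleton every run:

```python
import json
import tempfile
from pathlib import Path

def scaffold(root: str, name: str) -> Path:
    """Toy project generator: writes the same fixed skeleton every time."""
    base = Path(root) / name
    (base / "src").mkdir(parents=True, exist_ok=True)
    (base / "src" / "index.js").write_text('console.log("hello");\n')
    (base / "package.json").write_text(
        json.dumps({"name": name, "version": "0.1.0"}, indent=2) + "\n"
    )
    return base

app = scaffold(tempfile.mkdtemp(), "my-app")
print(sorted(p.name for p in app.rglob("*")))  # -> ['index.js', 'package.json', 'src']
```

The point stands: deterministic templates have always been able to produce the "look, a working site in seconds" demo.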
[+] mohsen1|3 months ago|reply
I really really want this to be true. I want to be relevant. I don’t know what to do if all those predictions are true and there is no need (or very little need) for programmers anymore.

But something tells me “this time is different” is different this time for real.

Coding AIs design software better than me, review code better than me, find hard-to-find bugs better than me, plan long-running projects better than me, make decisions based on research, literature, and also the state of our projects better than me. I’m basically just the conductor of all those processes.

Oh, and don't ask about coding. If you use AI for the tasks above, the result is a set of very well-defined coding tasks, which an AI would ace.

I’m still hired, but I feel like I’m doing the work of an entire org that used to need twenty engineers.

From where I’m standing, it’s scary.

[+] dataviz1000|3 months ago|reply
I was a chef in Michelin-starred restaurants for 11 years. One of my favorite positions was washing dishes. The goal was always to keep the machine running on its 5-minute cycle. It was about getting the dishes into racks, rinsing them, and having them ready and waiting for the previous cycle to end—so you could push them into the machine immediately—then getting them dried and put away after the cycle, making sure the quality was there and no spot was missed. If the machine stopped, the goal was to get another batch into it, putting everything else on hold. Keeping the machine running was the only way to prevent dishes from piling up, which would end with the towers falling over and breaking plates. This work requires moving lightning fast with dexterity.

AI coding agents are analogous to the machine. My job is to get the prompts written, and to do quality control and housekeeping after it runs a cycle. Nonetheless, like all automation, humans are still needed... for now.

[+] bigstrat2003|3 months ago|reply
> Coding AIs design software better than me, review code better than me, find hard-to-find bugs better than me, plan long-running projects better than me, make decisions based on research, literature, and also the state of our projects better than me.

That is just not true, assuming you have a modicum of competence (which I assume you do). AIs suck at all these tasks; they are not even as good as an inexperienced human.

[+] foxygen|3 months ago|reply
I think I've been using AI wrong. I can't understand testimonies like this. Most times I try to use AI for a task, it is a shitshow, and I have to rewrite everything anyway.
[+] belter|3 months ago|reply
>> From where I’m standing, it’s scary.

You are being fooled by randomness [1]

Not because the models are random, but because you are mistaking a massive combinatorial search over seen patterns for genuine reasoning. Taleb's point was about confusing luck for skill. Don't confuse interpolation for understanding.

You can read a Rust book after years of Java, then go build software for an industry that did not exist when you started. Ask any LLM to write a driver for hardware that shipped last month, or model a regulatory framework that just passed... It will confidently hallucinate. You will figure it out. That is the difference between pattern matching and understanding.

[1] https://en.wikipedia.org/wiki/Fooled_by_Randomness

[+] btbuildem|3 months ago|reply
They do all those things you've mentioned more efficiently than most of us, but they fall woefully short as soon as novelty is required. Creativity is not in their repertoire. So if you're banging out the same type of thing over and over again, yes, they will make that work light and then scarce. But if you need to create something niche, something one-off, something new, they'll slip off the bleeding edge into the comfortable valley of the familiar at every step.

I choose to look at it as an opportunity to spend more time on the interesting problems and to work at a higher level. We used to worry about pointers and memory allocation. Now we will worry less and less about how the code is written and more about the result it produces.

[+] Deep-Blue|3 months ago|reply
As of today NONE of the known AI codebots can solve correctly ANY of the 50+ programming exercises we use to interview fresh grads or summer interns. NONE! Not even level 1 problems that can be solved in fewer than 20 lines of code with a bit of middle school math.
[+] to11mtm|3 months ago|reply
It's definitely scary in a way.

However I'm still finding a trend even in my org; better non-AI developers tend to be better at using AI to develop.

AI still forgets requirements.

I'm currently running an experiment where I try to get a design and then execute on an enterprise 'SAAS-replacement' application [0].

AI can spit forth a completely convincing looking overall project plan [1] that has gaps if anyone, even the AI itself, tries to execute on the plan; this is where a proper, experienced developer can step in at the right steps to help out.

IDK if that's the right way to venture into the brave new world, but I am at least doing my best to be at the forefront of how my org is using the tech.

[0] - I figured it was a good exercise for testing limits of both my skills prompting and the AI's capability. I do not expect success.

[+] chii|3 months ago|reply
> I’m basically just the conductor of all those processes.

A car moves faster than you, can last longer than you, and can carry much more than you. But somehow, people don't seem to be scared of cars displacing them (yet)? Perhaps autodriving will in the near future, but there still needs to be someone making decisions on how best to utilize that car; surely, it isn't deciding to go to destination A without someone telling it.

> I feel like I’m doing the work of an entire org that used to need twenty engineers.

And this is great. A combine harvester does in a day what used to take an entire village a week. More output for fewer people and resources expended means more wealth produced.

[+] Desafinado|3 months ago|reply
That's kind of the point of the article, though.

Sure LLMs can churn out code, and they sort of work for developers who already understand code and design, but what happens when that junior dev with no hard experience builds their years of experience with LLMs?

Over time those who actually understand what the LLMs are doing and how to correct the output are replaced by developers who've never learned the hard lessons of writing code line by line. The ability to reason about code gets lost.

This points to the hard problem that the article highlights. The hard problem of software is actually knowing how to write it, which usually takes years, sometimes up to a decade of real experience.

Any idiot can churn out code that doesn't work. But working, effective software takes a lot of skill that LLMs will be stripping people of. Leaving a market there for people who have actually put the time in and understand software.

[+] jayd16|3 months ago|reply
My experience with these tools is nowhere close to this.

If you're really able to do the work of a 20 man org on your own, start a business.

[+] gingersnap|3 months ago|reply
This is not how I think about it. Me and the coding assistant together are better than me or the coding assistant separately.

For me it's not about me or the coding assistant; it's me and the coding assistant. But I'm also not a professional coder; I don't identify as a coder. I've been fiddling with programming my whole life but never had it as a title. I've worked more from the product side or the stakeholder side, but always got more involved, as I could speak with the dev team.

This also makes it natural for me to work side-by-side with the coding assistant, compared maybe to pure coders, who are used to keeping the coding side to themselves.

[+] zsoltkacsandi|3 months ago|reply
I have been using the most recent Claude, ChatGPT and Gemini models for coding for a bit more than a year, on a daily basis.

They are pretty good at writing code *after* I thoroughly describe what to do, step by step. If you miss a small detail, they get loose and the end result is a complete mess that takes hours to clean up. This still requires years of coding experience and planning ahead in your head; you won't be able to skip that, or replace developers with LLMs. They are like autocomplete on steroids; that's pretty much it.

[+] germandiago|3 months ago|reply
I am sorry to say you are not a good programmer.

I mean, AIs can produce something fast, the same way you cannot beat a computer at adding or multiplying.

After that, you find mistakes, false positives, code that does not fully work, and the worst part is the last one: code that does not fully work and that, as a consequence, you do NOT understand yet.

That is where your time shrinks: now you need to review it.

Also, they do not design systems better. Maybe partial pieces. Give them something complex and they will hallucinate worse solutions than what you already know if you have, let us say, over 10 years of experience programming in a language (or maybe 5).

Now multiply this unreliability problem as the code you "AI-generate" grows.

Now you have a system that you do not know is reliable and that you do not understand well enough to modify. Congrats...

I use AI moderately for the tasks it is good at: generating some scripts, or giving me a small, typical function that I then review.

Reviewing my code: as a person who knows the language well, I will discard most of its mistakes and hallucinations and maybe find a few valuable things.

Also, when reviewing my code for problems, the LLMs seemed to need to hallucinate errors that do not exist, as if to justify their help. This is just something LLMs do not seem to be accurate at.

Also, when problems get a bit more atypical or pass a certain level of difficulty, they become much more unreliable.

All in all: you are going to need humans. I do not know how many, and I do not know how much the models will improve. I just know that they are not reliable, and this trade-off of "generate fast but unreliable" versus "now I do not know the codebase" is a fundamental obstacle that is, if not impossible, very difficult to work around.

[+] alexwhb|3 months ago|reply
As much as my first gut reaction to this article was to be excited about its conclusion, I can’t say my experience matches up. Are LLMs perfect? Absolutely not, but I can point to many examples in my own work where using Codex has saved me easily a week or more—especially when it comes to tedious refactors. I don’t agree with the conclusion; the real-world improvement and progress I’ve seen in the last year in the problem-solving quality of these agents has been pretty astounding.

The reason for my excitement about the conclusion is obvious: I want programmers and people to stay in demand. I find the AI future to be quite bleak if this tech really does lead to AGI, but maybe that’s just me. I think we’re at a pretty cool spot with this technology right now—where it’s a powerful tool—but at some point, if or when it’s way smarter than you or me… I'm not sure that's a fun happy future. I think work is pretty tied to our self worth as humans.

[+] trashb|3 months ago|reply
> Edsger Dijkstra called it nearly 50 years ago: we will never be programming in English, or French, or Spanish. Natural languages have not evolved to be precise enough and unambiguous enough. Semantic ambiguity and language entropy will always defeat this ambition.

This is the most important quote for any AI coding discussion.

Anyone that doesn't understand how the tools they use came to be is doomed to reinvent them.

> The folly of many people now claiming that “prompts are the new source code”,

These are the same people that create applications in MS Excel.

[+] mikewarot|3 months ago|reply
>WYSIWYG, drag-and-drop editors like Visual Basic and Delphi were going to end the need for programmers.

VB6 and Delphi were the best possible cognitive impedance match available for domain experts to be able to whip up something that could get a job done. We haven't had anything nearly as productive in the decades since, as far as just letting a normie get something done with a computer.

You'd then hire an actual programmer to come in and take care of corner cases, and make things actually reliable, and usable by others. We're facing a very similar situation now, the AI might be able to generate a brittle and barely functional program, but you're still going to have to have real programmers make it stable and usable.

[+] EagnaIonat|3 months ago|reply
I read a book called "Blood in the machine". It's the history of the Luddites.

It really put everything into perspective to where we are now.

Before the industrial revolution, whole towns and families made clothing and had techniques for making quality clothes.

When the machines came, it wasn't overnight, but they wiped out nearly all the cottage industries.

The clothing they made wasn't of the same quality, but you could churn it out faster and cheaper. There was also the novelty of having machine-made clothes, which later normalised them.

We are at the beginning of the end of the cottage industry for developers.

[+] jader201|3 months ago|reply
I feel like the comments/articles that continue to point out how LLMs cannot solve complex problems are missing a few important points:

1. LLMs are only getting better from here. With each release, we continue to see improvements in their capabilities. And strong competition is driving this innovation, and will probably not stop anytime soon. Much of the world is focused on this right now, and therefore much of the world’s investments are being poured into solving this problem.

2. They’re using the wrong models for the wrong job. Some models are better than others at some tasks. And this gap is only shrinking (see point 1).

3. Even if LLMs can’t solve complex problems (and I believe they can/will, see points 1 and 2), much of our job is refactoring, writing tests, and hand-coding simpler tasks, which LLMs are undeniably good at.

4. It’s natural to deny that LLMs can eventually replace much or all of what we do. Our careers depend on LLMs not being able to solve complex problems, so that we don’t risk losing those careers. Not to mention the overall impact on our lives if AGI becomes a reality.

I’ve been doing this a while, and I’ve never seen the boost to productivity that LLMs bring. Yes, I’ve seen them make a mess of things and get things wrong. But see points 1-3.

[+] harrisreynolds|3 months ago|reply
I think "The Bitter Lesson" is relevant here [1].

This wave IS different.

It isn't a matter of "if" but "when".

I am a long-time software developer too... but I am strongly encouraging people to embrace the future now and get creative in finding ways to adapt.

There will always be demand for smart and creative people. BUT... if you don't look up and look around you right now, at some point you will be irrelevant.

Also see Ray Dalio's "Principles" book on embracing reality. A critical principle in the modern age of AI.

Nothing but love for my fellow developers!!

[1] http://www.incompleteideas.net/IncIdeas/BitterLesson.html

[+] robofanatic|3 months ago|reply
Why didn't ready-to-eat meals, available so cheaply at any grocery store, make local mom-and-pop restaurants irrelevant?
[+] d_silin|3 months ago|reply
In aviation safety, there is a concept of the "Swiss cheese" model, where each successive layer of safety may not be 100% perfect but has a different set of holes, so overlapping layers create a net gain in safety metrics.

One can treat current LLMs as a layer of "cheese" for any software development or deployment pipeline, so the goal of adding them should be an improvement for a measurable metric (code quality, uptime, development cost, successful transactions, etc).

Of course, one has to understand the chosen LLM behaviour for each specific scenario - are they like Swiss cheese (small numbers of large holes) or more like Havarti cheese (large number of small holes), and treat them accordingly.
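To make the layering intuition concrete: if the layers fail independently, the chance a defect escapes all of them is the product of the per-layer miss rates (the rates below are invented purely for illustration):

```python
def escape_probability(miss_rates):
    """Chance a defect slips through every layer, assuming independent failures."""
    p = 1.0
    for rate in miss_rates:
        p *= rate
    return p

# Three imperfect layers: LLM review misses 40% of defects,
# automated tests miss 20%, human review misses 30%.
print(escape_probability([0.40, 0.20, 0.30]))  # ~0.024: about 2.4% escape all three
```

The caveat above still bites, though: if two layers share the same holes, i.e. miss the same class of defect, the independence assumption breaks and the real escape rate is higher than the product suggests.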

[+] rr808|3 months ago|reply
The biggest threat to American software engineers is outsourcing; AI is just a distraction. I am an immigrant, and I work at a prestigious financial corporation in NYC. Pretty much 95% of the staff were born and did their undergraduate degrees in other countries. We hire a few grads, but they usually quit or get laid off after a few years; most new hires are H1Bs or contractors on H1Bs. We're just about to open another big office in a developing country.
[+] bdcravens|3 months ago|reply
Perhaps, but that's been going on for decades.
[+] aizk|3 months ago|reply
This time it actually is different. HN might not think so, but HN is really skewed towards more senior devs, so I think they're out of touch with what new grads are going through. It's awful.
[+] QuiEgo|3 months ago|reply
I think the only thing we can say about the future of LLMs is “we don’t know.” Everything is changing too fast. Models from a year ago are already obsolete. We seem to be hitting the top of an asymptote on improvements, but things have not been steady state for long enough to know. I also agree with the author that the VC money is driving all of this, and at some point the check is going to come due.

This thread is full of anecdotes, from “AI is useless for me” to “AI changes everything for me”. I believe each and every one of them.

In firm wait and see mode.

[+] laszlojamf|3 months ago|reply
The way I see it, the problem with LLMs is the same as with self-driving cars: trust. You can ask an LLM to implement a feature, but unless you're pretty technical yourself, how will you know that it actually did what you wanted? How will you know that it didn't catastrophically misunderstand what you wanted, making something that works for your manual test cases, but then doesn't generalize to what you _actually_ want to do? People have been saying we'll have self-driving cars in five years for fifteen years now. And even if it looks like it might be finally happening now, it's going glacially slow, and it's one run-over baby away from being pushed back another ten years.
[+] frankie_t|3 months ago|reply
Just like the pro-AI articles, this reads to me like a sales pitch. And the ending only adds to it: the author invites companies to contract him for training.

I would only be happy if in the end the author turns out to be right.

But as the things stand right now, I can see a significant boost to my own productivity, which leads me to believe that fewer people are going to be needed.

[+] pjmlp|3 months ago|reply
As someone who has watched AI systems become good enough to replace jobs like content creation on a CMS, this is denial.

Yes, software developers are still going to be needed, just far fewer of us, exactly like fully automated factories still need a few humans around to control and build the factory in the first place.

[+] simonw|3 months ago|reply
I nodded furiously at this bit:

> The hard part of computer programming isn't expressing what we want the machine to do in code. The hard part is turning human thinking -- with all its wooliness and ambiguity and contradictions -- into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.

> That was the hard part when programmers were punching holes in cards. It was the hard part when they were typing COBOL code. It was the hard part when they were bringing Visual Basic GUIs to life (presumably to track the killer's IP address). And it's the hard part when they're prompting language models to predict plausible-looking Python.

> The hard part has always been – and likely will continue to be for many years to come – knowing exactly what to ask for.

I don't agree with this:

> To folks who say this technology isn’t going anywhere, I would remind them of just how expensive these models are to build and what massive losses they’re incurring. Yes, you could carry on using your local instance of some small model distilled from a hyper-scale model trained today. But as the years roll by, you may find not being able to move on from the programming language and library versions it was trained on a tad constraining.

Some of the best Chinese models (which are genuinely competitive with the frontier models from OpenAI / Anthropic / Gemini) claim to have been trained for single-digit millions of dollars. I'm not at all worried that the bubble will burst and new models will stop being trained and the existing ones will lose their utility - I think what we have now is a permanent baseline for what will be available in the future.

[+] farazbabar|3 months ago|reply
I am good at software. It turns out that isn’t sufficient; alternatively stated, you have to be good at a number of other things beyond just churning out code, even good code. So to me, the combination of being good at software, understanding complexity, and the ability to articulate it concisely and precisely, when paired with the latest and greatest LLMs, is magic. I know people want examples of success, and I wish I could share what we are working on, but it is unbelievable how much more productive our team is, and I promise, we are solving novel problems, some that have not been tackled yet, at least not in any meaningful way. And I am having the time of my life doing what I love: coding. This is not to downplay folks who are having a hard time with LLMs or agents; I think it’s a skill you can learn, if you are already good at software and the adjacencies.
[+] dilly-dally|3 months ago|reply
Articles like this only reflect the inevitable. This is not an era of slow progress, which might have supported the author's opinion. This era is a rapid replacement of what was once dominated by masters-of-one who edged out others and arrogantly tried to hold onto their perceived positions of all-knowing. There will always be those types, unfortunately. But when the masters-of-one realize they've wasted time stalling the inevitable, instead of accepting and guiding the path forward and opening the door to a broader audience, you will see a lot more articles like this, which are a clear signal to many of us that they're simply doing and saying too much to hold onto their perceived worth.