
AI won't replace human devs anytime soon

109 points | skeptrune | 1 year ago | twitter.com

169 comments

[+] 123yawaworht456|1 year ago|reply
95% of real dev work is digging through piles of brownfield spaghetti to fix a bug or implement a feature. human-provided descriptions of the bug to fix or the feature to implement are often so vague that even a human developer can barely comprehend them. also more often than not, a visual link is required between the developer and the product being fixed/improved.

hypothetical AI will replace humans, sure. LLMs probably won't. original GPT4 was the best coding LLM I dealt with, and it was 1.5+ years ago. the context size was pitiful, yes, but it was a lot smarter than the current crop of big models.

[+] lokar|1 year ago|reply
Even just the ability to reproduce a non-trivial bug, and then work out what general area of the codebase it might be in, seems out of reach for an LLM
[+] moi2388|1 year ago|reply
I hate to break it to you, but that’s exactly what ChatGPT o1 excels at.

I swear to god, it’s better than our junior devs fresh out of college.

Like, I let the LLM solve a PBI and it’s just faster with better code.

Actual devs will still have a job, but the low-hanging-fruit, "write some code" guys? They're done. And a lot of companies had a bunch of them.

[+] scruple|1 year ago|reply
And that's to say nothing of the fact that the programs we build exist in an ecosystem of other programs, and sometimes you're lucky if you even have a copy of the source available... And that's without mentioning that it's all networked and you're dependent on other systems all the way down. So much of the work I do today is black box debugging where I can only make broad assumptions about the behavior of systems that are distant to my own based on their past performance, based on some notes someone took from a meeting 6 months ago, based on what the engineers on that team believe their systems are doing, etc...

A lot of times at work, support teams modify their systems and don't bother communicating what they're doing, they seem to rely on scream tests to figure out who is even using their services and for what purpose. It's a total fucking mess and I laugh whenever I hear someone suggest that "AI" is going to replace what I do when some massive % of what I do has fucking nothing to do with code at all.

[+] refurb|1 year ago|reply
It would be interesting if AI could be used to isolate bugs.

Have it run through all the branches of spaghetti code and isolate the problematic sections along with proposed changes.
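The mechanical core of this already exists as `git bisect run`, which binary-searches history for the commit that introduced a bug. A toy sketch of that search follows; the function names and the toy history are invented, and the buggy-or-not check could just as well be a test suite, or an LLM judging program output:

```python
# Hypothetical sketch: binary-search bug isolation over a commit history,
# the same idea that `git bisect run` automates. `is_buggy` stands in for
# any automated check (a failing test, or a model judging the output).

def first_bad_commit(commits, is_buggy):
    """Return the index of the first commit where is_buggy() is True.

    Assumes commits are ordered oldest-to-newest and the bug, once
    introduced, stays present in every later commit.
    """
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_buggy(commits[mid]):
            hi = mid          # bug already present: look earlier
        else:
            lo = mid + 1      # still good: look later
    return lo

# Toy history: the bug lands at commit index 6.
history = list(range(10))
print(first_bad_commit(history, lambda c: c >= 6))  # 6
```

The hard part an AI would add on top is writing the check itself, which is exactly the "comprehend the vague bug report" problem upthread.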

[+] 015a|1 year ago|reply
> original GPT4 was the best coding LLM I dealt with, and it was 1.5+ years ago. the context size was pitiful, yes, but it was a lot smarter than the current crop of big models.

Anytime someone says "this is the worst it'll ever be" I can't help but think "oh sweet summer child"; and then I ask them, what's your favorite operating system you've ever used?

The answer heavily depends on the person's age. Windows 10 or 7 dominates the conversation, and for slightly older folks, XP. "MacOS Snow Leopard" gets said a lot, that was a beloved MacOS release because they very famously said "no new features we're just fixing the old ones" and they actually did it.

No one ever says "MacOS Sequoia" or "Windows 11". Man, the discourse surrounding the new iOS 18 Photos app is reaching a fever pitch.

Sorry for the weird music on this, but it's the first clip I could find of even, gasp, Elon Musk, stating this very real fact [1]: technology does not automatically improve. In fact, software especially seems to love getting worse, and it takes a focused effort, from talented individuals, as far removed from money and business and agilescrum as possible, to keep things operationally stable.

But, sure, definitely the new metaversecryptoai bubblescam will escape that and be built to the highest possible quality. There definitely won't be investors and men in suits in every room saying "we're losing billions is there a knob we can turn to reduce electricity usage?" OpenAI is definitely raising billions to fund the R&D of the next level of intelligence, and certainly not because without that capital they'd be bankrupt in three months.

[1] https://www.youtube.com/watch?v=aDC6kWC8y2Y

[+] kgilpin|1 year ago|reply
I think people / the market have gotten a little too excited about something AI is actually pretty bad at - making changes to existing code (which is, after all, most of the code).

AI software devs don't understand requirements well, and they don't write code that conforms to established architecture practices. They will happily create redundant functions and routes and tables in order to deliver "working code".

So AI coding is bunk? No, it's just that the primary value lies elsewhere than code generation. You can stuff A LOT of context into an LLM and ask it to explain how the system works. You can ask it to create a design that emulates existing patterns. You can feed it a diff of code and ask it to look for common problems and anti-patterns. You can ask it to create custom diagrams, documentation and descriptions, and it will do so quickly and accurately.

These are all use cases that assist with coding, yet don’t involve actually writing code. They make developers more knowledgeable and they assist with decision making and situational awareness. They reduce tedium and drudgery without turning developers into mindless “clickers of the Tab key”.
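As a rough illustration of the diff-review use case, here is a sketch of assembling such a prompt. The checklist wording and function names are invented, and the actual model call is left out:

```python
# Sketch of the "feed it a diff and ask for anti-patterns" workflow.
# The review checklist and prompt wording are made up for illustration.

REVIEW_CHECKLIST = [
    "duplicated logic that already exists elsewhere in the codebase",
    "missing error handling on I/O or network boundaries",
    "changes that break established naming or layering conventions",
]

def build_review_prompt(diff: str, context_files: dict[str, str]) -> str:
    """Assemble a code-review prompt from a diff plus supporting context."""
    context = "\n\n".join(
        f"--- {path} ---\n{source}" for path, source in context_files.items()
    )
    checklist = "\n".join(f"- {item}" for item in REVIEW_CHECKLIST)
    return (
        "You are reviewing a change to an existing codebase.\n"
        f"Relevant existing files:\n{context}\n\n"
        f"Proposed diff:\n{diff}\n\n"
        f"Flag any of the following:\n{checklist}"
    )

prompt = build_review_prompt("+ def dup(): ...", {"util.py": "def dup(): ..."})
```

The interesting design decision is which existing files to include: that selection step is where "stuff A LOT of context" pays off or fails.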

[+] mattfrommars|1 year ago|reply
Neal Wu recently posted a video of him attending the Meta Hacker Cup coding challenge. This time around, Meta had two scoreboards: one for all-human entrants, and the other, I believe, for AI-assisted ones. Shockingly, the human leaderboard was significantly faster than the leaderboard for folks who used AI.

We have a long way to go before AI surpasses humans.

[+] kaushikc|1 year ago|reply
I don't care about writing code; all I care about is that my (non-tech) work gets done. I code out of necessity, to save time and reduce labour and errors. LLMs have given a non-programmer like myself superpowers: all I have to do is ask the right questions and I am directed to somewhat of a reasonable place to look for a solution and create one. I can create CRUD apps, use APIs and various stacks from multiple programming languages while referencing documentation, and build pretty much whatever I desire, which would otherwise have cost me a fortune and a lot of time from professional coders. Some of that work I could not hand to anyone else anyway, to maintain confidentiality.
[+] yashvg|1 year ago|reply
While I understand the desire to reassure developers, I think this perspective seriously underestimates the pace of progress in AI. Just 3-4 years ago, the idea of AI writing any functional code seemed far-fetched. Now they can handle many coding tasks competently.

The author lists specific tasks LLMs can't do today. But there's no fundamental reason they won't be able to in the near future. Domain expertise, understanding downstream effects, configuring CI pipelines: these are all learnable patterns. As models get larger, are trained on more diverse datasets, context windows grow, and new architectures emerge, these capabilities will come online rapidly. The jump from GPT-3 to GPT-4 was substantial, and we should expect continued leaps.

This doesn't mean human developers will become obsolete overnight. But it does mean the nature of software development work is likely to change significantly. Lower-level coding tasks may be increasingly automated, shifting focus to higher-level design, architecture, and problem framing.

Rather than dismissing the potential impact, we should be preparing for a world where AI significantly augments or even replaces many current development tasks. This might involve focusing more on skills that complement AI capabilities or exploring new areas where human creativity and insight remain critical.

[+] lurking_swe|1 year ago|reply
current AI is already stressing the power grid, and much of it will need to be redeveloped and improved just to keep pushing the limits of LLM’s. Power is the limiting factor with scaling here, so i’m rather unconvinced with your hypothesis. The improvements in the last 2-3 years are in no way indicative of the next 2-3 years.

I agree with your sentiment by the way, developers should find ways to use LLM’s to improve their development process. But the drama is getting old.

[+] Aeolun|1 year ago|reply
Maybe we should instead do those things when that time actually comes. Premature optimization and all that.
[+] aussieguy1234|1 year ago|reply
By the time AI can replace Human Devs, it'll also be able to replace almost all jobs that involve working on a computer.

That will trigger mass unemployment and our current economic system could collapse and need replacing with a new system, hopefully one where the profits of AI are shared around more equally.

As far as the jobs that could be automated away soon go, I'd say that being a developer is one of the least likely to be automated away in the short term and it may be one of the last to be automated.

[+] Aeolun|1 year ago|reply
> I'm a developer myself (2k+ commits 3yrs in a row)

What a completely weird thing to identify yourself by number of commits. I can make 5k+ commits in 3 days. Does that make me a better developer?

[+] tocs3|1 year ago|reply
I am curious to see how all this plays out. I am not a developer but have played some with ChatGPT to write some simple Python. It has done an OK job. When it works it is sometimes easier than other cut-and-paste methods. That is a long way from replacing those who know what they are doing (even at a junior level).

On the other hand I remember hearing the old electrical engineers saying "you do not need a microcontroller for that. Just use a resistor and capacitor, or a 555".

As an aside, is my Firefox spell check worse? Are they getting ready to replace it with an LLM?

[+] owenpalmer|1 year ago|reply
> LLMs cannot update the SDK code to V2 response types in the first place

Why not? Isn't it just a matter of adding the new V2 response type to the context window and asking it to update those types in another file?

> LLMs cannot successfully configure a CI action such that I won't have to publish myself

Why can't it do this? I'd like to see your prompt.

> LLMs do not understand the downstream effect of fixing the problem will be updating yournextstore's code

I suppose I don't understand the specifics enough to analyze why this isn't doable with an LLM. However, it seems like the theme is that code almost always relies on other code, and if one part breaks, it's hard to get it agreeing again. To solve this, you could have the LLM agent routinely add changelogs to the context window when making maintenance changes.
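A minimal sketch of that changelog idea (all names invented): keep a bounded log of maintenance changes and prepend it to each new prompt, so the model sees the recent downstream-relevant history without blowing the context budget.

```python
# Tiny rolling changelog an agent appends to after each maintenance change
# and prepends to every later prompt. Structure is invented for illustration.

from collections import deque

class ChangeLog:
    def __init__(self, max_entries: int = 50):
        # Bounded so old entries fall out instead of eating the context window.
        self.entries = deque(maxlen=max_entries)

    def record(self, summary: str) -> None:
        self.entries.append(summary)

    def as_context(self) -> str:
        header = "Recent changes to this codebase (newest last):"
        return "\n".join([header, *(f"- {e}" for e in self.entries)])

log = ChangeLog()
log.record("Migrated /orders endpoint to V2 response types")
log.record("Bumped SDK to 2.0; V1 types are now deprecated")
prompt = log.as_context() + "\n\nTask: fix the failing checkout test."
```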

If you can't get an LLM to do something, you probably aren't thinking creatively enough.

[+] noobermin|1 year ago|reply
All I'll say is: if you spent the past few years chiming in about how this will change everything, only to join the rest of us skeptics in the last year or so, you don't get to turn around and pretend you were right all along about the limitations, that AGI was never going to happen with LLMs alone, etc. I just wish some of you, who, let's be real, just jumped off the crypto bandwagon a few years ago, would do some introspection first.

Btw, if you do agree with OP about development, extend that to art, writing, etc. The same kind of deep domain specific knowledge is required in almost all things that humans do.

[+] segmondy|1 year ago|reply
Well, it's not a matter of if AI will replace human devs, it will.

The question is WHEN and HOW many devs?

Not all devs will be replaced, but how many will be replaced, and when will it start?

If you take your AI and your prompt it to build an app, are you not a dev? I believe the same way we have redefined AI and AGI numerous times, we will redefine what it means to be a developer.

[+] ristos|1 year ago|reply
I feel like the hype around LLMs replacing dev work is similar to what happened years ago when they were saying the same thing about scripting languages and WYSIWYG editors like Dreamweaver...

What ended up happening in practice is just that time was saved on grunt work, and really engineers just ended up working on composing and debugging higher level components, optimization, or low level work.

[+] NBJack|1 year ago|reply
Honestly, this opinion is borderline tinfoil hat territory, but I'd like to point out that between the sudden market expectation that fired hundreds of thousands of developers across multiple companies (in a very close time period!) and constant "LLMs that replace humans are just around the corner" marketing drivel, it almost appears as if The Market (tm) wants to really just drive down the costs of developers more than anything else.

Perhaps it's all a completely legit market turn, and I'm the one in the dark. But I continue to be pessimistic that LLMs are truly going to change anything for the better.

[+] majormajor|1 year ago|reply
2020 bubble hiring was weird enough that I mostly throw out both the 2020 hiring and the 2022 layoffs and just compare things to 2018/2019. That ramp-up in the 2010s was called a bubble for years even before Covid, but it still hasn't popped. The expected product bar just keeps getting higher instead.
[+] Ekaros|1 year ago|reply
I always question whether the firings were due to some LLM or to changing market conditions, namely much higher rates. Higher rates mean investors don't need to chase new products, and companies funding products with debt find it a lot more expensive. Thus there is less demand for new products, and therefore less demand for people to build them.

It could very well be a period where simply less software gets produced, and the software that does get built needs to make returns at least in the medium term.

[+] onlyrealcuzzo|1 year ago|reply
There's already a type of search that is much better now than 3 years ago.
[+] JTyQZSnP3cQGa8B|1 year ago|reply
It’s not your tinfoil hat IMHO. I see a few managers and non-technical people (IRL and more on LinkedIn) who “study” prompting with the idea that they can replace people with LLMs and cut down costs.

LLMs can be useful in very specific tasks but these people don’t care about it as they don’t have the technical knowledge to see how and why. All they care about is more money for them.

It won’t be limited to developers though if that happens, and with the darkest scenario, UBI will have to happen as you can’t have half the population unemployed roaming the streets.

[+] qntmfred|1 year ago|reply
> The Market (tm) wants to really just drive down the costs of developers more than anything else.

well yeah. this wasn't going to last forever.

in the US especially. Salaries here were bonkers compared to the rest of the global developer workforce. Gotta remember, remote work wasn't common and barely possible at scale not that long ago. COVID put an end to that, and with it the expectation that management hire developers in any particular local geography. Despite all the RTO drama in some organizations, the majority of the economy was always going to cash in on the advantages of hiring developers in other countries, and squeezing even more productivity out of them with AI tooling.

Downward pressure has been a long time coming. Hope y'all enjoyed the ride.

[+] Ferret7446|1 year ago|reply
Senior devs, no. Junior devs? Uh...

I've been seeing the same thing Steve Yegge has with AI. It could very well replace all of the grunt work in software engineering.

[+] mikeocool|1 year ago|reply
People keep saying this, but if I had to give a junior the level of direction I give an AI bot every day, beyond their first few weeks, that person would be on a PIP pretty quickly.

Maybe I’m just fortunate, but most places I’ve worked the difference between jr and senior has been the scope they owned and the amount of time spent architecting/reviewing/mentoring vs heads down coding. Seniors are not just handing off grunt work to juniors.

[+] lokar|1 year ago|reply
The grunt work, not the hard work.

I do the hard work of software Eng away from my computer, trying to think clearly about the problem(s).

[+] mattlutze|1 year ago|reply
If teams are hiring Junior developers to give them "grunt work," they're doing something wrong to begin with.
[+] fzeroracer|1 year ago|reply
Juniors are far more valuable to your organization than just grunt work. They offer a fresh, 'untainted' perspective. They might know about new technology, or understand how customers interact with your system better than the senior engineer who has been behind the desk for so long that all they see is the bugs and issues. AI cannot do anything like this; it can only affirm what you ask of it, in the ways you ask it.

Of course many organizations don't like the whole 'training' people thing anymore so it's rough for junior devs out there. Good for my career since I can demand a premium since the pool of experienced engineers will only go down over time, but ultimately bad for software as a whole.

[+] hindsightbias|1 year ago|reply
Given that all these news articles are AI-generated, I think they're shouting at clouds now. The AI knows how to generate clicks better than you do.
[+] dbetteridge|1 year ago|reply
I personally like the idea of a "companion AI", it comes up a lot in video games and sci-fi stories (funnily enough ones where capitalism has been taken out back and shot.)

But the concept of a second brain that you can use to rubber duck concepts, decipher documentation and save your wrists from carpal tunnel when dealing with boilerplate is very useful.

With that all said, as a senior dev my day is only about 50-75% coding (on a good day) and the rest is often meetings/planning boards/bug triage/helping juniors and just plain translating product requirements into meaningful pieces of work for Devs, which I've not had much luck using an LLM to replace.

[+] WheelsAtLarge|1 year ago|reply
I agree but it's important to know that AI is a tool that devs need to understand and incorporate into their daily work life.
[+] al_borland|1 year ago|reply
Is a statement like this any different than someone saying developers needed modern IDEs 20 years ago? Yet we still have people in the industry using vim or emacs, and they are successful.

I will occasionally ask Copilot for something when I'm in a hurry and there is some minor thing I don't care about that someone is telling me needs to get done. The other 99% of the time, I'm doing it myself. Any time I reach for AI instead of doing it myself, I'm learning less, growing less, and understanding my code less. Why would anyone want this?

[+] gerdesj|1 year ago|reply
Your comment is rather prescriptive without any working. Why do I need to ... ?

So far, I have managed to work out how to spot a hallucinating ChatGPT derived "organic blog post" but it is bloody annoying.

Back in the day, Linux-related queries would generally end up in Gentoo, Arch, Ubuntu, Mint (et al.) forums or wikis, or perhaps Reddit and co; sometimes they would end up in TLDP by accident. Now we have a plethora of wankery "blogs" that clog up the search returns. Mostly very pretty, mostly following the same old pattern, and mostly correct-ish yet wrong at some crucial point.

I have to deal with a lot of pretty complicated and quite niche stuff, for example HA Proxy, Apache Guacamole, on Linux which itself (despite running mostly everything on our lovely planet) is also considered niche.

Windows related queries normally end up with a post on an MS site with a response that suggests that "SFC /SCANNOW" will fix everything from stiffness in the joints to cancer. That has always been the way since around the late noughties, thanks to a rather lax hiring policy becoming the norm in a large part of the world using internet points as a score for hiring. That is understandable and humans being lazy and abrogating responsibility is not a new thing. Now we have multi zillion <currency> things happening that claim intelligence and what looks suspiciously tulip flavoured.

Oh dear!

[+] grayhatter|1 year ago|reply
funny, I recently switched back to vim, and delayed enabling many IDE features to the point that I don't even use tab complete anymore. I'm actually enjoying writing code again. It's easier to get into the flow state, and generally I find the code I do write to be higher quality. That last bit might be completely subjective, and likely a bit of not just sample bias but also confirmation bias. Still, I have no interest in any flavor of code complete, or other advanced IDE features. For me, it seems to be a net loss.

I understand how generative AI works better than most of the SWEs I work with, but I have absolutely no desire to incorporate it into my workflow. I like that I understand how the code I wrote works, but even if I didn't, I wouldn't trade this rediscovered enjoyment for anything, including the "promised" career velocity.

[+] skeptrune|1 year ago|reply
100%, but my mental exhaustion grows every time I see a post claiming Claude or some other LLM can build your whole company
[+] add-sub-mul-div|1 year ago|reply
Not for writing code, no. If you're experienced enough it's going to slow you down.
[+] bigstrat2003|1 year ago|reply
Only if it provides value, which at this point I wouldn't agree it does.
[+] mattl|1 year ago|reply
Nobody needs to incorporate AI into their daily work life.
[+] fzeroracer|1 year ago|reply
Honestly, when I see stuff like this I laugh all the way to the bank knowing how many developers and teams are opening themselves up to massive security holes and/or bugs by trying to incorporate LLMs into their toolsets.

People are just willingly leaving massive landmines across their codebase waiting to blow their feet off and don't even have the experience to know when the code generated by the AI is bad or not.

[+] faangguyindia|1 year ago|reply
80% of coders just make CRUD apps.

Those working in specialized fields will hold on for much longer.

But CRUD apps, apps that just glue a bunch of APIs to a backend and database, will go away.

Most apps do not need much scaling, so highly specialized scaling masters aren't really needed.

Currently using the Zed editor, I am blown away by its AI integration.

Though completion via FIM through custom LLMs is lacking.

And there are other problems, like lacking git integration; I prefer VSCode for that!

[+] 015a|1 year ago|reply
> Those working in specialized fields will hold on for much longer.

The thing about AI up to this point is that it has replaced, well, very few jobs, but the jobs it seems most capable of replacing are extremely counter-intuitive. Eight years ago, everyone thought it would be self-driving cars, machines, hard labor, etc. Turns out that stuff is really hard, and the first industries to fall were actually the more creative ones, like writing and art.

If you have an intuition for how the next ten years will look, I'd implore you to be open to the reality that there is no way you could have predicted the world-state of 2024 from the perspective of 2019; and that's only 5 years.

Here's the counter-intuitive take that I believe to be true: Specialized coding might fall faster than generalist CRUD apps, if either falls at all. The value in specialized engineering is biased a lot heavier toward knowledge of the specialized thing you know about. Versus, the problems generalist CRUD coders deal with every day aren't actually technical or coding problems; they're business problems, coordination, resource allocation, and politics. AI has demonstrated itself as being pretty good at knowing things, even highly specialized things; it has not demonstrated itself as being very good at taking responsibility for its actions.

[+] swagasaurus-rex|1 year ago|reply
CRUD apps can unexpectedly become surprisingly complicated. Take the humble to-do list, for example.

So you can create, edit, even delete to-dos, and it’s even hosted online.

* Once a user has a lot of to-dos, they might want to organize them. You could organize them hierarchically, or with tags, or represented as a graph.

* What if somebody makes a new to-do or an edit while their cell phone is out of service?

* Are multiple users supported? Can users collaborate on a single to-do? Are entire organizations supported?

* what happens if somebody accidentally deletes a to-do? Can they ever get it back?

* What about concurrent edits to the same to-do?

* Can just anybody make to-dos? What if somebody writes a bot to make millions of to-dos on your website?

* If I load up my to-do list when my phone has no service, do I just get a loading spinner?

The perfect to-do app might actually take an expert a year or more to write. Then it still has to generate a profit somehow.

[+] majormajor|1 year ago|reply
CRUD isn't the complexity with CRUD apps. Working with the rest of the business to make sure the processes get built out in ways that make sense and are actually worthwhile is.

So maybe one dev can handle writing five or ten times as many CRUD services as before. The idea that the biz person is gonna get rid of those devs entirely and babysit the coding agents/deploys/etc themselves strikes me as wildly unrealistic, though. Nor is it realistic that SAAS as a market will just dry up because individuals will just have their own coder agents that do everything for them.

Question is if there's going to be an "enough is enough" point feature/change-wise. But I can't think of the last time I worked at a company where the product team and senior management threw up their hands and said "we did it, we finally cleared the entire product development backlog" as opposed to generating ideas at a rate many multiples of the rate of implementation.

The baseline "here's what you can get with a shoestring team" level of output will get higher. But it will also get higher for your competitors. So expectations will just be much higher. So you just gotta move that much faster now.

[+] kordlessagain|1 year ago|reply
So an agentic system can handle git. Would you be willing to use that to manage those processes for you? What about Docker?
[+] selcuka|1 year ago|reply
> But CRUD apps, apps requiring gluing bunch of APIs with backend and database will go away.

They were already going away with the improvements in no-code tools, Zapier, scaffolding frameworks etc. LLMs are just another automated tool in this context.

[+] kkfx|1 year ago|reply
Anyone who has ever tried to generate code via LLMs knows that. The main hype point, though, is selling something that can't work, and people believe it because it hides a very simple phenomenon: after DECADES of technical possibility and opportunity, IT automation has only timidly become widespread. People have finally discovered they do not need to physically go to a bank to deal with it, since web banking has been here for decades; they have discovered they do not need an insurance broker for 99% of insurance, choosing an online offer-comparison portal instead; they do not even need an office to do office work, since without paper we can WFH.

This obviously means many jobs are not needed anymore: much of ETL, office front-line work, etc. But you can't tell anyone this is not new, that it's decades-old stuff, because then you're clearly a bad manager who could have made things much better much earlier. Nor can you simply state that most people are ignorant of IT and that this is a dramatic problem for society, because if we knew IT we could create new jobs, while if not we can only eliminate many, and so on. So: "hey, it's AI!" [and obviously that's new, eh]...

Aside from that, the office use of LLMs could lead to interesting nightmares in infosec terms, so brace yourselves...

[+] zubairq|1 year ago|reply
I “wish” AI would replace me so that I could do other stuff than coding
[+] 93po|1 year ago|reply
ive loved the past two weeks of working on a project where cursor is doing 98% of the coding. i have a sweet project moving at 20x the speed i could normally do it