
AI Is Making Developers Dumb

173 points | chronicom | 11 months ago | eli.cx

212 comments


popularrecluse|11 months ago

"Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."

I've tolerated writing my own code for decades. Sometimes I'm pleased with it. Mostly it's the abstraction standing between me and my idea. I like to build things, the faster the better. As I have the ideas, I like to see them implemented as efficiently and cleanly as possible, to my specifications.

I've embraced working with LLMs. I don't know that it's made me lazier. If anything, it inspires me to start when I feel in a rut. I'll inevitably let the LLM do its thing, and then, LLMs being what they are, I will take over and finish the job my way. I seem to be producing more product than I ever have.

I've worked with people and am friends with a few of these types; they think their code and methodologies are sacrosanct, and that if the AI moves in, there is no place for them. I got into the game for creativity, it's why I'm still here, and I see no reason to select myself for removal from the field. The tools, the syntax, it's all just a means to an end.

SirMaster|11 months ago

This is something that I struggle with for AI programming. I actually like writing the code myself. Like how someone might enjoy knitting or model building or painting or some other "tedious" activity. Using AI to generate my code just takes all the fun out of it for me.

yubblegum|11 months ago

> I've tolerated writing my own code for decades.

The only reason I got sucked into this field was because I enjoyed writing code. What I "tolerated" (professionally) was having to work on other people's code. And LLM code is other people's code.

Philip-J-Fry|11 months ago

I've accepted this way of working too. There is some code that I enjoy writing. But what I've found is that I actually enjoy just seeing the thing in my head actually work in the real world. For me, the fun part was finding the right abstractions and putting all these building blocks together.

My general way of working now is, I'll write some of the code in the style I like. I won't trust an LLM to come up with the right design, so I still trust my knowledge and experience to come up with a design which is maintainable and scalable. But I might just stub out the detail. I'm focusing mostly on the higher-level stuff.

Once I've designed the software at a high level, I can point the LLM at this using specific files as context. Maybe some of them have the data structures describing the business logic and a few stubbed out implementations. Then Claude usually does an excellent job at just filling in the blanks.
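(As a concrete illustration of that kind of stubbed-out starting point, a minimal hypothetical sketch; the domain and names here are invented, not from the comment:)

    from dataclasses import dataclass

    # Data structures and stubs the human writes; the LLM fills in the bodies.
    @dataclass
    class Invoice:
        customer_id: str
        line_items: list[tuple[str, int, float]]  # (sku, quantity, unit_price)

    def total_due(invoice: Invoice, tax_rate: float) -> float:
        """Sum the line items and apply tax_rate (e.g. 0.2 for 20%)."""
        raise NotImplementedError  # stubbed: the types and docstring are the spec

    def apply_discount(invoice: Invoice, code: str) -> Invoice:
        """Return a new Invoice with the discount for `code` applied."""
        raise NotImplementedError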

I've still got to sanity check it. And I still find it doing things which look like they came right from a junior developer. But I can suggest a better way and it usually gets it right the second or third time. I find it a really productive way of programming.

I don't want to be writing the data layer of my application. It's not fun for me. LLMs handle that for me and let me focus on what makes my job interesting.

The other thing I've kinda accepted is to just use it or get left behind. You WILL get people who use this and become really productive. It's a tool which enables you to do more. So at some point you've got to suck it up. I just see it as a really impressive code generation tool. It won't replace me, but not using it might.

tyre|11 months ago

> I like to build things, the faster the better.

what's the largest (traffic, revenue) product you've built? quantity >>>> quality of code is a great trade-off for hacking things together but doesn't lend itself to maintainable systems, in my experience.

Have you seen it work over the long term?

senordevnyc|11 months ago

I resonate so strongly with this. I’ve been a professional software engineer for almost twenty years now. I’ve worked on everything from my own solo indie hacker startups to now getting paid a half million per year to sling code for a tech company worth tens of billions. I enjoy writing code sometimes, but mostly I just want to build things. I’m having great fun using all these AI tools to build things faster than ever. They’re not perfect, and if you consider yourself to be a software engineer first, then I can understand how they’d be frustrating.

But I’m not a software engineer first, I’m a builder first. For me, using these tools to build things is much better than not using them, and that’s enough.

ryanackley|11 months ago

I don't think the author is saying it's a dichotomy. Like, you're either a disciple of doing things "ye olde way" or allowing the LLM to do it for you.

I find his point to be that there is still a lot of value in understanding what is actually going on.

Our business is one of details and I don't think you can code strictly having an LLM doing everything. It does weird and wrong stuff sometimes. It's still necessary to understand the code.

moogly|11 months ago

I like coding on private projects at home; that is fun and creative. The coding I get to do at work, in between waiting for CI, scouring logs, monitoring APM dashboards and reviewing PRs, in a style and at an abstraction level I find inappropriate, is not interesting at all. A type of change that might take 10 minutes at home might take 2 days at work.

lucideer|11 months ago

There's two sides to this:

> "as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."

I find this statement problematic for a different reason: we live in a world where minimum wages (if they exist) are lower than living wages, and mean wages are significantly lower than the point at which well-being indices plateau. In that context, calling people out for working in a field that "isn't for them" is inutile - if you can get by in the field, then leaving it simply isn't logical.

THAT SAID, I do find the above comment incongruent with reality. If you're in a field that's "not for you" for economic reasons that's cool but making out that it is in fact for you, despite "tolerating" writing code, is a little different.

> I got into the game for creativity

Are you confusing creativity with productivity?

If you're productive that's great; economic imperative, etc. I'm not knocking that as a positive basis. But nothing you describe in your comment would fall under the umbrella of what I consider "creativity".

dfabulich|11 months ago

We've seen this happen over and over again, when a new leaky layer of abstraction is developed that makes it easier to develop working code without understanding the lower layer.

It's almost always a leaky abstraction, because sometimes you do need to know how the lower layer really works.

Every time this happens, developers who have invested a lot of time and emotional energy in understanding the lower level claim that those who rely on the abstraction are dumber (less curious, less effective, and they write "worse code") than those who have mastered the lower level.

Wouldn't we all be smarter if we stopped relying on third-party libraries and wrote the code ourselves?

Wouldn't we all be smarter if we managed memory manually?

Wouldn't we all be smarter if we wrote all of our code in assembly, and stopped relying on compilers?

Wouldn't we all be smarter if we were wiring our own transistors?

It is educational to learn about lower layers. Often it's required to squeeze out optimal performance. But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.

(My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)

danielmarkbruce|11 months ago

LLMs don't create an abstraction. They generate code. If you are thinking about LLMs as a layer of abstraction, you are going to have all kinds of problems.

simonw|11 months ago

Leaky abstractions is a really appropriate term for LLM-assisted coding.

The original "law of leaky abstractions" talked about how the challenge with abstractions is that when they break you now have to develop a mental model of what they were hiding from you in order to fix the problem.

(Absolutely classic Joel Spolsky essay from 22 years ago which still feels relevant today: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a... )

Having LLMs write code for you has a similar effect: the moment you run into problems, you're going to have to make sure you deeply understand exactly what they have done.

bakugo|11 months ago

Those aren't the same thing at all, and you already mentioned why in your comment: leakiness. The higher up you go on the abstraction chain, the leakier the abstractions become, and the less viable it is to produce quality software without understanding the layer(s) below.

You can generally trust that transistors won't randomly malfunction and give you wrong results. You can generally trust that your compiler won't generate the wrong assembly, or that your interpreter won't interpret your code incorrectly. You can generally trust that your language's automatic memory management won't corrupt memory. It might be useful to understand how those layers work anyway, but it's usually not a hard requirement.

But once you reach a certain level of abstraction (usually one level above the programming language), you'll start running into more and more issues resulting from abstraction leaks that require understanding the layer below to properly fix. Probably the most blatant example of this nowadays is "React developers" who don't know JS/CSS/HTML and WILL constantly run into issues that they can't properly solve as a result, and are forced to either give up or write the most deranged workarounds imaginable, consisting of hundreds of lines of unintelligible spaghetti.

AI is the highest level of abstraction so far, and as a result, it's also the leakiest abstraction so far. You CANNOT write proper functional and maintainable code using an LLM without having at least a decent understanding of what it's outputting, unless you're writing baby's first todo app or something.

skoodge|11 months ago

Not all of those abstractions are equally leaky though. Automatic memory management, for example, is leaky only for a very narrow set of problems; in many situations the abstraction works extremely well. It remains to be seen whether AI can be made to leak so rarely (which does not mean that it's not useful even in its current leaky state).

notTooFarGone|11 months ago

If we just talk in analogies: a cup is also leaky, because fluid escapes via vapour. It's not the same as a cup with a hole in it.

LLMs currently have tiny holes and we don't know if we can fix them. Established abstractions are more like cups that may leak, but only in certain conditions (when it's hot).

gtsop|11 months ago

> But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.

This is the weakest point which breaks your whole argument.

I see it happening ALL the time: newer web developers enter the field from an angle of high abstraction, and whenever these abstractions don't work well, they are completely unable to proceed. They wouldn't be in that place if they knew the low level, and it DOES prevent them from delivering "value" to their customers.

What is even worse: since these developers don't understand exactly why some problem manifests, and don't even understand exactly what their abstraction truly solves, they wrongly proceed to solve the problem using the wrong (high-level) tools.

keybored|11 months ago

And every time the commentariat dismisses it with the trope that it’s the same as the other times.

It’s not the same as the other times. The naysayers might be the same elitists as the last time. But that’s irrelevant because the moment is different.

It’s not even an abstraction. An abstraction of what? It’s English/Farsi/etc. text input which gets translated into something that no one can vouch for. What does that abstract?

You say that they can learn about the lower layers. But what’s the skill transfer from the prompt engineering to the programming?

People who program in memory-managed languages are programming. There’s no paradigm shift when they start doing manual memory management. It’s more things to manage. That’s it.

People who write spreadsheet logic are programming.

But what are prompt engineers doing? ... I guess they are hoping for the best. Optimism is what they have in common with programming.

101008|11 months ago

> (My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)

Agree, especially useful when you join a new company and have to navigate a large codebase (or a badly maintained codebase, which is even worse by several orders of magnitude). I had no luck asking the LLM to fix this or that, but it did mostly OK when I asked how it works and what the code is trying to do (it includes mistakes, but that's fine, I can see them, which is different from code that I just copy and paste).

happytoexplain|11 months ago

This "it's the same as the past changes" analogy is lazy - everywhere it's reached for, not just AI. It's basically just "something something luddites".

Criticisms of each change are not somehow invalid just because the change is inevitable, like all the changes before it.

notarobot123|11 months ago

When a higher level of abstraction allows programmers to focus on the detail relevant to them they stop needing to know the low level stuff. Some programmers tend not to be a fan of these kinds of changes as we well know.

But do LLMs provide a higher level of abstraction? Is this really one of those transition points in computing history?

If they do, it's a different kind to compilers, third-party APIs or any other form of higher level abstraction we've seen so far. It allows programmers to focus on a different level of detail to some extent but they still need to be able to assemble the "right enough" pieces into a meaningful whole.

Personally, I don't see this as a higher level of abstraction. I can't offload the cognitive load of understanding, just the work of constructing and typing out the solution. I can't fully trust the output and I can't really assemble the input without some knowledge of what I'm putting together.

LLMs might speed up development and lower the bar for developing complex applications but I don't think they raise the problem-solving task to one focused solely on the problem domain. That would be the point where you no longer need to know about the lower layers.

vishalontheline|11 months ago

Last year I learned a new language and framework for the first time in a while. Until I became used to the new way of thinking, the discomfort I felt at each hurdle was both mental and physical! I imagine this is what many senior engineers feel when they first begin using an AI programming assistant, or an even more hands-off AI tool.

Oddly enough, using an AI assistant, despite it guessing incorrectly as often as it did, helped me learn and write code faster!

jaredklewis|11 months ago

I think it’s usually helpful if your knowledge extends a little deeper than the level you usually work at. You need to know a lot about the layer you work in, a good amount about the surrounding layers, and maybe just a little bit about more distant layers.

If you are writing SQL, it’s helpful to understand how database engines manage storage and optimize queries. If you write database engine code, it’s helpful to understand (among many other things of course) how memory is managed by the operating system. If you write OS code, it’s helpful to understand how memory hardware works. And so on. But you can write great SQL without knowing much of anything about memory hardware.
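(As a toy illustration of that one-layer-down knowledge, a minimal Python sketch using only the built-in sqlite3 module and an invented schema: knowing to ask the engine for its query plan, and why an index changes it, is exactly the kind of adjacent-layer knowledge being described:)

    import sqlite3

    # In-memory toy database: one table, no index yet.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
    conn.executemany(
        "INSERT INTO orders (customer, total) VALUES (?, ?)",
        [(f"cust{i % 100}", i * 1.5) for i in range(10000)],
    )

    query = "SELECT COUNT(*) FROM orders WHERE customer = 'cust42'"

    # Without an index, the engine must scan the whole table...
    print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # e.g. 'SCAN orders'

    # ...and understanding that is what tells you an index will help.
    conn.execute("CREATE INDEX idx_customer ON orders (customer)")
    print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # e.g. 'SEARCH ... USING INDEX'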

The reverse is also true in that it’s good to know what is going on one level above you as well.

Anyway my experience has been that knowledge of the adjacent stack layers is highly beneficial and I don’t think exaggerated.

djaouen|11 months ago

When a lower layer fails, your ability to remedy a situation depends on your ability to understand that layer of abstraction. Now: if an LLM produces wrong code, how do you know why it did that?

GoblinSlayer|11 months ago

Understanding assembler (or even just the ABI) is useful.

Sparkyte|11 months ago

Let me alter this perspective: you can use it to learn what parts of the code do, and use it for commenting, helping you read what might otherwise be unreadable. LLMs and programming are good, but not great. However, it can easily be something that teaches a developer the parts of the code they are working on.

yubblegum|11 months ago

The difference between a mechanic and an engineer, illustrated...

yapyap|11 months ago

Sounds like you’re coping cause you have some type of investment in LLM “coding”, whether that is financial or emotional.

I won’t waste my time too much reacting to this nonsensical comment, but I’ll just give this example: LLMs can hallucinate, generating code that’s not real. LLMs don’t work off straight rules; they’re influenced by a seed. Normal abstraction layers aren’t.

I dearly hope you’re arguing in bad faith, otherwise you are really deluded with either programming terms or reality.

intelVISA|11 months ago

Abstraction is fine, it allows you to work faster, or easier. Reliance that becomes dependency is the problem - when abstraction supersedes fundamentals you're no longer able to reason about the leaks and they become blindspots.

Don't confuse low-level tedium with CS basics. If you're arguing that knowing how computers work is not relevant to working as a SWE, then sure, but why would a company want a software dev that doesn't seem to know software? Usually your irreplaceable value as a developer is knowing and mitigating the leaks so they don't wind up threatening the business.

This is where the industry most suffers from not having a standardized-ish hierarchy. You're right that most shops don't need a trauma surgeon on call for treating headaches, but there are still many medical options before resorting to a random grifter who simply "watched some Grey's Anatomy" because "med school was a barrier to providing value to customers".

MarcelOlsz|11 months ago

I've had a similar experience. I built out a feature using an LLM and then found the library it must have been "taking" the code from, so what I ended up with was a much worse, mangled version of what already existed and what I would have found had I taken the time to properly research. I've now fully gone back to just getting it to prototype functions for me in-editor based off comments, and I do the rest. Setting up AI pipelines with rule files and stuff takes all the fun away and feels like extremely daunting work I can't bring myself to do. I would much rather just code than act as a PM for a junior that will mess up constantly.

When the LLM heinously gets it wrong 2, 3, 4 times in a row, I feel a genuine rage bubbling that I wouldn't get otherwise. It's exhausting. I expect within the next year or two this will get a lot easier and the UX better, but I'm not seeing how. Maybe I lack vision.

switchbak|11 months ago

You’re exactly right on the rage part, and that’s not something I’ve seen discussed enough.

Maybe it’s the fact that you know you could do it better in less time that drives the frustration. For a junior dev, perhaps that frustration is worth it because there’s a perception that the AI is still more likely to be saving them time?

I’m only tolerating this because of the potential for long term improvement. If it just stayed like it is now, I wouldn’t touch it again. Or I’d find something else to do with my time, because it turns an enjoyable profession into a stressful agonizing experience.

rvense|11 months ago

Is it just me or has this been a year or two off for at least a year or two now?

jll29|11 months ago

LLMs also take away the motivation from students to properly concentrate on and deeply understand a technical problem (including but not limited to coding problems); instead, they copy, paste and move on without understanding. The electronic calculator analogy might be apt: it's a tool appropriate once you have learned how to do the calculations by hand.

In an experiment (six months long, twice repeated, so a one-year study), we gave business students ChatGPT and a data science task to solve that they did not have the background for (develop a sentiment analysis classifier for German-language recommendations of medical practices). With their electronic "AI" helper, they could find a solution, but the scary thing is they did not acquire any knowledge on the way, as exit interviews clearly demonstrated.

As a friend commented, "these language models should never have been made available to the general public", only to researchers.

simonw|11 months ago

> As a friend commented, "these language models should never have been made available to the general public", only to researchers.

That feels to me like a dystopian timeline that we've only very narrowly avoided.

It wouldn't just have been researchers: it would have been researchers and the wealthy.

I'm so relieved that most human beings with access to an internet-connected device have the ability to try this stuff and work to understand what it can and cannot do themselves.

tokinonagare|11 months ago

I'm giving a programming class and students use LLMs all the time. I see it as a big problem because:

- it puts focus on syntax instead of the big picture, instead of finding articles or posts on Stack Overflow explaining things beyond how to write them. AI gives them the "how" so they don't think about the "why"

- students almost don't ask questions anymore. Why would they, when an AI gives them code?

- AI output contains notions, syntax and APIs not seen in class, adding to the confusion

Even the best students have a difficult time answering basic questions about what was covered in the last (3-hour) class.

butterlettuce|11 months ago

I wish I had an LLM as a student because I couldn’t afford a tutor and googling for information was tedious.

It’s the college’s responsibility now to teach students how to harness the power of LLMs effectively. They can’t keep their heads in the sand forever.

currymj|11 months ago

it's particularly bad for students who should be trying to learn.

at the same time in my own life, there are tasks that I don't want to do, and certainly don't want to learn anything about, yet have to do.

For example, figuring out a weird edge case combination of flags for a badly designed LaTeX library that I will only ever have to use once. I could try to read the documentation and understand it, but this would take a long time. And, even if it would take no time at all, I literally would prefer not to have this knowledge wasting neurons in my brain.

thierrydamiba|11 months ago

What do you think is the big difference between these tools and calculators?

maratc|11 months ago

A personal anecdote from my previous place:

A junior developer was tasked with writing a script that would produce a list of branches that haven't been touched for a while. I've got the review request. The big chunk of it was written in awk -- even though many awk scripts are one-liners, they don't have to be -- and that chunk was kinda impressive, making some clever use of associative arrays, auto-vivification, and more pretty advanced awk stuff. In fact, it was actually longer than any awk that I have ever written.

When I asked them, "where did you learn awk?", they were taken by surprise -- "where did I learn what?"

Turns out they just fed the task definition to some LLM and copied the answer to the pull request.
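(For reference, the underlying task barely needs awk at all; a minimal sketch in Python, shelling out to git, assuming "haven't been touched for a while" means no commits in the last 90 days:)

    import subprocess
    from datetime import datetime, timedelta, timezone

    cutoff = datetime.now(timezone.utc) - timedelta(days=90)

    # Ask git for each local branch's last commit date in strict ISO format.
    out = subprocess.run(
        ["git", "for-each-ref", "refs/heads",
         "--format=%(committerdate:iso8601-strict) %(refname:short)"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        date_str, branch = line.split(" ", 1)
        if datetime.fromisoformat(date_str) < cutoff:
            print(branch)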

simonw|11 months ago

I wonder if it would work to introduce a company policy that says you should never commit code if you aren't able to explain how it works?

I've been using that as my own personal policy for AI-assisted code and I am finding it works well for me, but would it work as a company policy thing?

luckylion|11 months ago

One of the advantages of working with people who are not native English speakers is that, if their English suddenly becomes perfect and they can write concise technical explanations in tasks, you know it's some LLM.

Then if you ask for some detail on a call, it's all uhm, ehm, ehhh, "I will send example later".

jtwaleson|11 months ago

Plato, in the Phaedrus, 370BC: "They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."

tarkin2|11 months ago

And our memory has declined. He was right.

nextts|11 months ago

Good job he wrote that down.

karaterobot|11 months ago

> I got into software engineering because I love building things and figuring out how stuff works. That means that I enjoy partaking in the laborious process of pressing buttons on my keyboard to form blocks of code.

I think this is a mistake. Building things and figuring out how stuff works is not related to pressing buttons on a keyboard to form blocks of code. Typing is just a side effect of the technology used. It's like saying that in order to be a mathematician, you have to enjoy writing equations on a whiteboard, or to be a doctor you must really love filling out EHR forms.

In engineering, coming up with a solution that fits the constraints and requirements is typically the end goal, and the best measure of skill I'm aware of. Certainly it's the one that really matters the most in practice. When it is valuable to type everything by hand, then a good engineer should type it by hand. On the other hand, if the best use of your time is to import a third-party library, do that. If the best solution is to create a code base so large no single human brain can understand it all, then you'd better do that. If the easiest path to the solution is to offload some of the coding to an LLM, that's what you should do.

mahoro|11 months ago

> There is a concept called “Copilot Lag”. It refers to a state where after each action, an engineer pauses, waiting for something to prompt them what to do next.

I've been experiencing this for 10-15 years. I type something and then wait for the IDE to complete function names, class methods, etc. From this perspective, LLMs won't hurt too much because I'm already dumb enough.

snickerbockers|11 months ago

It's really interesting how minor changes in your workflow can completely wreck productivity. When I'm at work I spend at least 90% of my time in emacs, but there are some programs I'm forced to use that are only available via Win32 GUI apps, or cursed webapps. Being forced to abandon my keybinds and move the mouse around hunting for buttons to click and then moving my hand from the mouse to the keyboard then back to the mouse really fucks me up. My coworkers all use MSVC and they don't seem to mind it all because they're used to moving the mouse around all the time; conversely a few of them actually seem to hate command-driven programs the same way I hate GUI-driven programs.

As I get older, it feels like every time I have to use a GUI I get stuck in a sort of daze, because my mind has become optimized for the specific work I usually do at the expense of the work I usually don't do. I feel like I'm smarter and faster than I've ever been at any prior point in my life, but only for a limited class of work, and anything outside of that turns me into a senile old man. This often manifests in me getting distracted by YouTube, Windows Solitaire, etc. because it's almost painful to try to remember how to move the mouse around through all these stupid menus with a million poorly-documented buttons that all have misleading labels.

SoftTalker|11 months ago

This is the reason I don’t use auto-completing IDEs. Pretty much vanilla emacs. I do often use syntax highlighting for the language, but that’s the limit of the crutches I want to use.

wilburTheDog|11 months ago

An LLM is a tool. It's your choice how you use it. I think there are at least two ways to use it that are helpful but don't replace your thinking. I sometimes have a problem I don't know how to solve that's too complex to ask google. I can write a paragraph in ChatGPT and it will "understand" what I'm asking and usually give me useful suggestions. Also I sometimes use it to do tedious and repetitive work I just don't want to do.

I don't generally ask it to write my code for me because that's the fun part of the job.

Bukhmanizer|11 months ago

I think the issue is that a lot of orgs are systematically using the tool poorly.

I’m responsible for a couple legacy projects with medium sized codebases, and my experience with any kind of maintenance activities has been terrible. New code is great, but asking for fixes, refactoring, or understanding the code base has had an essentially 2% success rate for me.

Then you have to wonder: how the hell do orgs expect to maintain/scale ever more code from fewer devs, who don't even understand how the original code worked?

LLMs are just a tool but overreliance on them is just as much of a code smell as - say - deciding your entire backend is going to be in Matlab; or all your variables are going to be global variables - you can do it, but I guarantee that it’s going to cause issues 2-3 years down the line.

moribvndvs|11 months ago

I am at the point of abandoning coding copilots because I spend most of my time fighting the god damned things. Surely, some of this is on me, not tweaking settings or finding the right workflow to get the most of it. Some of it is problematic UX/implementation in VSCode or Cursor. But the remaining portion is an assortment of quirks that require me to hover over it like an overattentive parent trying to keep a toddler from constantly sticking its fingers in electrical sockets. All that plus the comparatively sluggish and inconsistent responsivity is fucking exhausting and I feel like I get _less_ done in copilot-heavy sessions. Up to a point they will improve over time, but right now it makes programming less enjoyable for me.

On the other hand, I am finding LLMs increasingly useful as a moderate expert on a large swath of subjects, available 24/7, who will never get tired of repeated clarifications, tangents, and questions, and who can act as an assistant to go off and research or digest things for you. It’s a mostly decent rubber duck.

That being said, it’s so easy to land in the echo chamber bullshit zone, and hitting the wall where human intuition, curiosity, ingenuity, and personality would normally take hold for even a below average person is jarring, deflating, and sometimes counterproductive, especially when you hit the context window.

I’m fine with having it as another tool in the box, but I rather do the work myself and collaborate with actual people.

agumonkey|11 months ago

It's also making the sleazy and lazy ones thrive a bit more, which is quite painful when passionate devs who are also great colleagues don't gain any real leverage from ChatGPT.

bobxmax|11 months ago

Humble craftsmen have long been getting replaced by automation and technology. Devs are resisting the same way as everyone else did before them but it's futile.

It's just especially poignant/painful because developers are being hoisted by their own petard, so to speak.

Cyclone_|11 months ago

I use LLMs for generating small chunks of code (less than 150 lines), but I am of the opinion that you should always understand what generated code is doing. I take time to read through it and make sure it makes sense before I actually run it. I've found that for smaller chunks of code it's usually pretty accurate on the first try. Occasionally it can't figure it out at all, even with trying to massage the prompt to be more descriptive.

Velorivox|11 months ago

I use Claude Sonnet to generate large chunks of code, practically as a form of macro expansion. Such as when adapting SQL queries to a new migration, or adding straightforward UI. Even still, it sometimes isn’t great and I would never commit anything without carefully observing what it actually wrote. More importantly, I never ask it to do something I myself don’t know how to do, especially if I suspect a library or best practice exists.

In other words, I treat it exactly like stochastic autocomplete. It makes me lazier, I’m sure, but the first part of the article above is a rant against a tautology: any tool worth using ought to be missed by the user if they stopped using it!

BooneJS|11 months ago

If you use LLMs in lieu of searching Stack Overflow, you're going to go faster and be neither smarter nor dumber. If you're prompting for entire functions, I suspect it'll be a crutch you learn to rely on forever.

flutas|11 months ago

Personally I think there's a middle ground to be had there.

I use LLMs to write entire test functions, but I also have specs for it to work from and can go over what it wrote and verify it. I never blindly go "yeah this test is good" after it generates it.

I think that's the middle ground: knowing where and when it can handle a full function/implementation vs. a single- or multi-line (short) auto-completion.

Guthur|11 months ago

I'm in full agreement with this, and it's part of the reason I'm considering leaving the software engineering field for good.

I've been programming for over 25 years, and the joy I get from it is the artistry of it; I see beauty in systems constructed in the abstract realm. But LLM-based development removes much of that. I haven't used, nor do I desire to use, LLMs for this, but I don't want to compete with people who do, because I won't win in the short-term nature of corporate performance-based culture. And so I'm now searching for careers that will be more resistant to LLM-based workflows. Unfortunately, in my opinion, this pretty much rules out any knowledge-based economy.

yhoots|11 months ago

Most code is unoriginal boilerplate that serves a business need. LLMs are very good at generating output that is 60-90% of the way there.

palmotea|11 months ago

That's probably the mechanism by which AI will take over many jobs:

1. Skilled people do a good job, AI does a not-so-good job.

2. AI users get dumbed down so they can't do any better. Mediocrity normalized.

3. Replace the AI users with AI.

bee_rider|11 months ago

It’s always been very weird that little hobbyist open source projects produce much better software than billion-dollar companies. But I guess it will be even more notable now that the billion-dollar garbage-shoveling companies are getting self-operating shovels.

Cyclone_|11 months ago

In this scenario, if AI does a not-so-good job, there will still be good developers left to code.

sys64739|11 months ago

It ruined my friend's startup. Junior dev "wrote" WAY too much code with no ability to support it after the fact. Glitches in production would result in the kid disappearing for weeks at a time because he had no idea how anything actually worked under the hood. Friend was _so_ confident of his codebase before shit hit the fan - the junior dev misrepresented the state of the world, b/c he simply didn't know what he didn't know.

colonelspace|11 months ago

Sounds like your friend ruined his own startup by employing only a junior dev?

jazzcomputer|11 months ago

I'm learning JavaScript as my first programming language and I'm somewhere around beginner/intermediate. I used ChatGPT for a while, but stopped after a time and just mostly use documentation now. I don't want code solutions, I want code learning, and I want certainty behind that learning.

I do see a time where I could use copilot or some LLM solution but only for making stuff I understand, or to sandbox high level concepts of code approaches. Given that I'm a graphic designer by trade, I like 'productivity/automation' AI tools and I see my approach to code will be the same - I like that they're there but I'm not ready for them yet.

I've heard people say I'll get left behind if I don't use AI, and that's fine as I'll just use niche applications of code alongside my regular work as it's just not stimulating to have AI fill in knowledge blanks and outsource my reasoning.

feverzsj|11 months ago

Tried several times for C++; almost always got nonsense results. Maybe they only work for weakly typed languages.

mrweasel|11 months ago

Nope, also pretty shitty for Python, at least that's my experience from my rather limited usage. I might be using it wrong though.

The problem is that the LLM won't find design mistakes. E.g. trying to get the value of a label in Textual: you can technically do it, but you're not really supposed to. The variable starts with an underscore, so that's an indication that you shouldn't really touch it. The LLMs will happily help you attempt to use a non-existent .text attribute, then start running in circles, because what you're doing is a design mistake.

LLMs are probably fairly helpful for situations where the documentation is lacking, but simple auto-complete also works well enough.

jfcwu|11 months ago

[deleted]

Kiro|11 months ago

I also love building things. LLM-assisted workflows have definitely not taken this away. If anything, it has only amplified my love for coding. I can finally focus on the creative parts only.

That said, the author is probably right that it has made me dumber or at least less prolific at writing boilerplate.

moffkalast|11 months ago

> Over time, I started to forget basic foundational elements of the languages I worked with. I started to forget parts of the syntax, how basic statements are used

It's a good thing tbh. Language syntax is ultimately entirely arbitrary and is the most pointless thing to have to keep in mind. Why bother focusing on that when you can use the mental effort on the actual logic instead?

This has been a problem for me for years before LLMs, constantly switching languages and forgetting what exact specifics I need to use because everyone thinks their super special way of writing the same exact thing is best and standards are avoided like the plague. Why do we need two hundred ways of writing a fuckin for loop?

righthand|11 months ago

I told my colleagues that if they’re just going to send me LLM code, I cannot review it and will assume they already double-checked the work themselves. This gives them instant approval, and if they want to spend time submitting follow-up PRs because they’re not double-checking their code and not understanding it, then they can do that. I honestly did this for two reasons:

1. The problem domain is a marketing site (low risk)

2. I got tired of fixing bad LLM code

I have noticed the people who do this are caught up in the politics at work and not really interested in writing code.

I have no desire to be a code janitor.

casey2|11 months ago

This entire line of reasoning is worker propaganda. Like the boss is some buffoon and the employees constantly have to skirt his nonsensical requirements to create a reasonable product.

It's a cartoon mentality. Real products have more requirements than any human can fathom; correctness is just one of the uncountable tradeoffs you can make. Understanding, or some kind of scientific value, is another.

If anything but a single-minded focus on your pet requirement is dumb, then call me dumb, idc. Why YOU got into software development is not why anyone else did.

TrackerFF|11 months ago

Wonder if we'll have this discussion in 20 years. Or will traditional programmers be some niche "artisanal" group of workers, akin to what bootmakers and bespoke tailors are today.

GoblinSlayer|11 months ago

I'm pretty sure I saw the word "programming" being used as a synonym for webdev.

kennysoona|11 months ago

Gen Z kind of already have a reputation for being 'dumb': supposedly being unable to do basic tasks expected of an entry-level office worker, or questioning basic things like why tasks get delegated down the chain. Maybe being bad at coding, especially if they are using AI, is just part of that?

I heard about the term 'vibe coding' recently, which really just means copying and pasting code from an AI without checking it. It's interesting that that's a thing, I wonder how widespread it is.

deeviant|11 months ago

> Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them

Conversely: some people want to insist that writing code 10x slower is the right way to do things, that horses were always better and more dependable than cars, and that nobody would want to step into one of those flying monstrosities. And they may also find that they are no longer in the right field.

grandempire|11 months ago

This is the "new technology is always better" argument, which invokes the imagery of all the times it was true and ignores all the products that have been disposed of.

The truth is it depends on every detail. What technology. For who. When.

bigfishrunning|11 months ago

> Some people want to insist that writing code 10x slower is the right way to do things

If the code has to be correct, then this is right

bee_rider|11 months ago

Wait, let’s give it a couple years, the way Boeing is going the horse people might have had a point. I’m not 100% sold on the idea that our society will long-term be capable of maintaining the infrastructure required to do stuff like airplanes.

tracerbulletx|11 months ago

If you want AI to make you less dumb, instead of using it like Stack Overflow, you can go on a road trip and have a deep conversation about a topic or field you want to learn more about. You can have it quiz you, do mock interviews, ask questions, have a chat; it's incredible at that. As long as it's not something where the documentation is less than a year or two old.

chasing|11 months ago

AI tools are great. They don’t absolve you from understanding what you’re doing and why.

One of the jobs of a software engineer is to be the point person for some pieces of technology. The responsible person in the chain. If you let AI do all of your job, it’s the same as letting a junior employee do all of your job: Eventually the higher-ups will notice and wonder why they need you.

falcor84|11 months ago

> As they’re notorious for making crap up because, well, that’s how LLMs work by design, it means that they’re probably making up nonsense half the time.

I found this to be such a silly statement. I find arguments generated by AI to be significantly more solid than this.

Centigonal|11 months ago

I think "AI makes developers dumb" makes as much sense as "becoming a manager makes developers dumb."

I was an engineer before moving to more product and strategy oriented roles, and I work on side projects with assistance from Copilot and Roo Code. I find that the skills that I developed as a manager (like writing clear reqs, reviewing code, helping balance tool selection tradeoffs, researching prior art, intuiting when to dive deep into a component and when to keep it abstract, designing system architectures, identifying long-term-bad ideas that initially seem like good ideas, and pushing toward a unified vision of the future) are sometimes more useful for interacting with AI devtools than my engineering skillset.

I think giving someone an AI coding assistant is pretty bad for having them develop coding skills, but pretty good for having them develop "working with an AI assistant" skills. Ultimately, if the result is that AI-assisted programmers can ship products faster without sacrificing sustainability (i.e. you can't have your codebase collapse under the weight of AI-generated code that nobody understands), then I think there will be space in the future for both AI-power users who can go fast as well as conventional engineers who can go deep.

minimaxir|11 months ago

What modern LLMs are good at is reducing boilerplate for workflows that are annoying and tedious but a) genuinely save time, b) are less likely for an LLM to screw up, and c) are easy to spot-check to identify issues in the event the LLM does mess up.

For example, in one of my recent blog posts I wanted to use Python's Pillow to composite five images: one consisting of the left half of the image, the other four in quadrants (https://github.com/minimaxir/mtg-embeddings/blob/main/mtg_re...). I know how to do that in PIL (have to manually specify the coordinates and resize images) but it is annoying and prone to human error and I can never remember what corner is the origin in PIL-land.

Meanwhile I asked Claude 3.5 Sonnet this:

   Write Python code using the Pillow library to compose 5 images into a single image:

   1. The left half consists of one image.
   2. The right half consists of the remaining 4 images, equally sized with one quadrant each
And it got the PIL code mostly correct, except it tried to load the images from a file path which wasn't desired, but it is both an easy fix and my fault since I didn't specify that.
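(For the curious, a minimal sketch of the kind of Pillow code that prompt describes, assuming already-loaded Image objects rather than file paths; PIL's origin is the top-left corner:)

    from PIL import Image

    def compose_five(left, quads, size=1024):
        # Canvas is `size` wide and size//2 tall; origin is top-left.
        half, quarter = size // 2, size // 4
        canvas = Image.new("RGB", (size, half))
        # Left half: one image resized to fill a half x half square.
        canvas.paste(left.resize((half, half)), (0, 0))
        # Right half: the 4 remaining images in a 2x2 grid of quarter-size tiles.
        coords = [(half, 0), (half + quarter, 0), (half, quarter), (half + quarter, quarter)]
        for img, (x, y) in zip(quads, coords):
            canvas.paste(img.resize((quarter, quarter)), (x, y))
        return canvas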

Point (c) above is also why I despise the "vibe coding" meme because I believe it's intentionally misleading, since identifying code and functional requirement issues is an implicit requisite skill that is intentionally ignored in hype as it goes against the novelty of "an AI actually did all of this without much human intervention."

knallfrosch|11 months ago

Just a generic rant. How many people can sew, fell a tree, or skin an animal? Yeah, I thought so.

And no data or link to data either. Just a hand-wavy "I think it happened to me".

xpl|11 months ago

People said the same thing about IntelliSense a long time ago.

qayxc|11 months ago

There's a difference in quality, though. IntelliSense was never meant to be more than autocomplete or suggestions (function names, variable names, etc.), i.e. the typing out and memorizing of API calls and function signatures. LLMs in the context of programming are tools that aim to replace the thinking part. Big difference.

I don't need to remember all the functions and their signatures for APIs I rarely use; it's fine if a tool like IntelliSense (or an ol' LSP, really) acts like a handy cheat sheet for these. Having a machine auto-implement entire programs or significant parts of them is on another level entirely.

nialv7|11 months ago

Or maybe AI is enabling dumb people to program?

bflesch|11 months ago

Or it makes dumb people become developers ;)

atomic128|11 months ago

Here is a disturbing look at what the absolute knobs at Y Combinator (and elsewhere) are preaching/pushing, with commentary from Primeagen: https://www.youtube.com/watch?v=riyh_CIshTs

Watch the whole thing, it's hilarious. Eventually these venture capitalists are forced to acknowledge that LLM-dependent developers do not develop an understanding and hit a ceiling. They call it "good enough".

The use of LLMs for constructive activities (writing, coding, etc.) rapidly produces a profound dependence. Try turning it off for a day or two, you're hobbled, incapacitated. Competition in the workplace forces us down this road to being utterly dependent. Human intellect atrophies through disuse. More discussion of this effect, empirical observations: https://www.youtube.com/watch?v=cQNyYx2fZXw

To understand the reality of LLM code generators in practice, Primeagen and Casey Muratori carefully review the output of a state-of-the-art LLM code generator. They provide a task well-represented in the LLM's training data, so development should be easy. The task is presented as a cumulative series of modifications to a codebase: https://www.youtube.com/watch?v=NW6PhVdq9R8

This is the reality of what's happening: iterative development converging on subtly or grossly incorrect, overcomplicated, unmaintainable code, with the LLM increasingly unable to make progress. And the human, where does he end up?

p0nce|11 months ago

But this is exactly what the higher-ups want, according to Braverman: they will insist on "know-how" being non-existent, and always push to tell workers what they, of course, know of the work that we peons would ignore.

IshKebab|11 months ago

"Calculators are making people dumb"

"Spell checkers are making people dumb"

"Wikipedia is making people dumb"

Nothing to see here.

qayxc|11 months ago

> "Calculators are making people dumb"

Wrong quote - "calculators are making people bad at doing maths" was the fear. Turns out, they didn't, but they didn't help either [1]

> "Spell checkers are making people dumb"

Well, at this point I assume you use "dumb" as a general stand-in for "worse at the skill in question". Here, however, research shows that spell checkers and auto-correct do indeed seem to have a negative influence on learning proper spelling and grammar [2]. The main takeaway seems to be that handwriting in particular is a major contributor to learning and practicing written language skills. [2]

> "Wikipedia is making people dumb"

Honestly, haven't heard that one before. Did you just make that up? Apart from people like Plato, thousands of years ago, owning and using books, encyclopaedias, and dictionaries has generally been viewed as a sign of a cultured and knowledgeable individual in many cultures... I don't see how an online source is any different in that regard.

The decline of problem-solving and analytical skills, short attention spans, lack of foundational knowledge, and the subsequent loss of valuable training material for our beloved stochastic parrots, though, might prove to become a problem in the future.

There's a qualitative difference between relying on spell checkers while still knowing the words and slowly losing the ability to formulate, express, and solve problems in an analytical fashion. Worst case we're moving towards E.M. Forster's dystopian "The Machine Stops"-scenario.

[1] https://www.jstor.org/stable/42802150?seq=1#page_scan_tab_co...

[2] https://www.researchgate.net/publication/362696154_The_Effec...

GoblinSlayer|11 months ago

People who know how to write use spell checkers as assistants. People who don't know how to write use spell checkers to do everything for them, effectively replacing one set of errors with another.

betimsl|11 months ago

> [...] This is to the point where is starts to become hard for you to work without one.

Why would one work without one?

tarkin2|11 months ago

Do human servants make you lazier or more productive? (A sincere thought experiment)

captainclam|11 months ago

This is one of the many many experiences in the tapestry of people figuring out how to use this new tool.

There will be many such cases of engineers losing their edge.

There will be many cases of engineers skillfully wielding LLMs and growing as a result.

There will be many cases of hobbyists becoming empowered to build new things.

There will be many cases of SWEs getting lazy and building up huge, messy, intractable code bases.

I enjoy reading from all these perspectives. I am tired of sweeping statements like "AI is Making Developers Dumb."

Frederation|11 months ago

*Inexperienced devs using tools to think for them instead of problem solving.

hbogert|11 months ago

Everything is making us dumb. I remember when ATMs would give out your money before giving back your card. You would often find someone's card and maybe you could still shout to them if you saw them walking away.

Back then you'd giggle about how silly that person was, you wouldn't forget your card would you? Somewhere since then the mindset shifted and if a machine would allow for this to happen everybody would agree the designers of the machine did not do a good job on the user-experience.

This is just a silly example, but throughout everyday life everything has become streamlined, and you can just cruise through a day on auto-pilot: machines will autocorrect you, or the process of using them makes it near impossible to get into an anomalous state. Sometimes I do have the feeling all this made us 'dumber', and I don't actively think anymore when interfacing with things because I assume it's foolproof.

However, not having to actively think about every little thing when interfacing with systems does give a lot of free mental capacity to be used for other things.

When reading these things I always get the feeling it's simply a "kids these days" piece. Go back 40 years when hardly anybody would use punch cards anymore. I'd imagine there were a lot of "real" developers who advocated that "kids" are wasting CPU cycles and memory because they've lost touch with the hardware and if they simply kept using punchcards they'd get a sense of "real" programming again.

My takeaway is, if we expect our ATMs to behave sane and keep us from doing dumb things, why wouldn't we expect at least a subset of developers wanting to get that same experience during development?

lowbloodsugar|11 months ago

Honestly I just don’t remember the names of methods and without my IDE I’d be a lot less productive than I am now. Are IDEs a problem?

The bit about “people don’t really know how things work anymore”: my friend, I grew up programming in assembly; I’ve modified the kernel on games consoles. Nobody around me knocking out their C# and their TypeScript has any idea how these things work. Like, I can name the people on the campus that do.

LLMs are a useful tool. Learn to use them to increase your productivity or be left behind.

Sparkyte|11 months ago

I don't think it is making developers dumb; you still need to audit and review the code. As long as you augment your writing by relying on base templating, finding material to read, or having it explain code, it is really good.

mulmen|11 months ago

AI lowers the bar. You can say Python makes developers dumb too. Or that canned food makes cooks dumb. That’s not really the point though. When something is easier more people can do it. That expansion is biased downward.

jas39|11 months ago

Frankly, I don't think this is true at all. If anything I notice, for me, that I make better and more informed decisions in many aspects of life. I think this criticism comes from a position of someone having invested a lot of time in something AI can do quite well.

Etheryte|11 months ago

For me, the main question in this context would be whether the decisions are better informed or they just feel better informed. I regularly get LLMs to lie to me in my areas of expertise, but there I have the benefit that I can usually sniff out the lie. In topics I'm not that familiar with, I can't tell whether the LLM is confidently correct or confidently incorrect.

tehjoker|11 months ago

It's crazy to me how people talk about it like it was aeons ago when these tools came out, like, two years ago.

gdubs|11 months ago

I live on a farm and there are a lot of things that machines can do faster and cheaper. And for a lot of tasks, it makes more sense from a time / money tradeoff.

But I still like to do certain things by hand. Both because it's more enjoyable that way, and because it's good to stay in shape.

Coding is similar to me. 80% of coding is pretty brain dead — boilerplate, repetitive. Then there's that 20% that really matters. Either because it requires real creativity, or intentionality.

Look for the 80/20 rule and find those spots where you can keep yourself sharp.

cadamsdotcom|11 months ago

AI makes developers smarter when used in smart ways. How amazing to have code generated for you, freeing you to consider the next task (ie. “sit there waiting for the next task to come to mind”) .. oh, by the way, if you don’t understand the code, highlight it and ask for an explanation. Repeat ad infinitum until you understand what you’re reading.

The dumb developers are those resisting this amazing tool and trend.

kelseyfrog|11 months ago

Books made orators dumb. I'm not sure this argument has ever had any credence, not now and not when Socrates came up with his version for his time.

Any technology that renders a mental skill obsolete will undergo this treatment. We should be smart enough to recognize the rhetoric it is rather than pretend it's a valid argument for Luddism.

rvogler|11 months ago

"There’s a reason behind why I say this. Over time, you develop a reliance on [search engines]. This is to the point where is [sic!] starts to become hard for you to work without one."

annjose|11 months ago

I experimented with vibe coding [0] yesterday to build a Pomodoro timer app [1] and had a mixed experience.

The process - instead of typing code, I mostly just talked (voice commands) to an AI coding assistant - in this case, Claude Sonnet 3.7 with GitHub Copilot in Visual Studio Code and the macOS built-in Dictation app. After each change, I’d check if it was implemented correctly and if it looked good in the app. I’d review the code to see if there are any mistakes. If I want any changes, I will ask AI to fix it and again review the code. The code is open source and available in GitHub [2].

On one hand, it was amazing to see how quickly the ideas in my head were turning into real code. Yes, reviewing the code takes time, but it is far less than if I were to write all that code myself. On the other hand, it was eye-opening to realize that I need to be diligent about reviewing the code written by AI and ensuring that my code is secure, performant, and architecturally stable. There were a few occasions when the AI wouldn't realize there was a mistake (at one time, a compile error) and I had to tell it to fix it.

No doubt AI-assisted programming is changing how we build software. It gives you a pretty good starting point; it will take you 70-80% of the way there. But a production-grade application at scale requires a lot more work on architecture, system design, database, observability and end-to-end integration.

So I believe we developers need to adapt and understand these concepts deeply. We’ll need to be good at:

  - Reading code - Understanding, verifying and correcting the code written by AI
  - Systems thinking - understand the big picture and how different components interact with each other
  - Guiding the AI system - giving clear instructions about what you want it to do
  - Architecture and optimization - Ensuring the underlying structure is solid and performance is good
  - Understand the programming language - without this, we wouldn't know when AI makes a mistake
  - Designing good experiences - As coding gets easier, it becomes more important and easier to build user-friendly experiences
Without this knowledge, apps built purely through AI prompting will likely be sub-optimal, slow, and hard to maintain. This is an opportunity for us to sharpen our skills and a call to action to adapt to the new reality.

[0] https://en.wikipedia.org/wiki/Vibe_coding

[1] https://my-pomodoro-flow.netlify.app/

[2] https://github.com/annjose/pomodoro-flow

EVa5I7bHFq9mnYK|11 months ago

I'd say PHP and JS made developers dumb. And this is the kind of "developers" that AI is currently replacing.

Gualdrapo|11 months ago

I don't need AI to be dumb.

gtsop|11 months ago

My guess is that AI will make programming even more miserable for those who entered the field for the wrong reasons. Now is the time to double down on learning the basics, the low level, the under-the-hood stuff.

MrMcCall|11 months ago

"Pay a lot, cry once." --Chinese Proverb