top | item 45405383

reb | 5 months ago

I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

The plan-build-test-reflect loop is equally important when using an LLM to generate code, as anyone who's seriously used the tech knows: if you yolo your way through a build without thought, it will collapse in on itself quickly. But if you DO apply that loop, you get to spend much more time on the part I personally enjoy, architecting the build and testing the resultant experience.

> While the LLMs get to blast through all the fun, easy work at lightning speed, we are then left with all the thankless tasks

This is, to me, the root of one disagreement I see playing out in every industry where AI has achieved any level of mastery. There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work. If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting. But if you like the doing, the typing, fiddling with knobs and configs, etc etc, all AI does is take the good part away.

PessimalDecimal|5 months ago

> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

A software engineer's primary job isn't producing code, but producing a functional software system. Most important to that is the extremely hard to convey "mental model" of how the code works and an expertise in the domain it works in. Code is a derived asset of this mental model. And you will never know code as well as a reader as you would have as the author, for anything larger than a very small project.

There are other consequences of not building this mental model of a piece of software. Reasoning at the level of syntax is proving to have limits that LLM-based coding agents are having trouble scaling beyond.

danpat|5 months ago

> And you will never know code as well as a reader as you would have as the author, for anything larger than a very small project.

This feels very true - but also consider how much code exists for which many of the current maintainers were not involved in the original writing.

There are many anecdotal rules out there about how much time is spent reading code vs writing. If you consider the industry as a whole, it seems to me that the introduction of generative code-writing tools is actually not moving the needle as far as people are claiming.

We _already_ live in a world where most of us spend much of our time reading and trying to comprehend code written by others from the past.

What's the difference between a messy codebase created by a genAI, and a messy codebase where all the original authors of the code have moved on and aren't available to ask questions?

mattlutze|5 months ago

> The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

In any of my teams with moderate to significant code bases, we've always had to lean very hard into code comments and documentation, because a developer will forget in a few months the fine details of what they've previously built. And further, any org with turnover needs to have someone new come in and be able to understand what's there.

I don't think I've met a developer that keeps all of the architecture and design deeply in their mind at all times. We all often enough need to go walk back through and rediscover what we have.

Which is to say... if the LLM generator were instead a colleague or neighboring team, you'd still need to keep up with them. If you can adapt those habits to the generated code, then it doesn't seem to be a big leap.

jstummbillig|5 months ago

> The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

Why? Code has always been the artifact. Thinking about and understanding the domain clearly and solving problems is where the intrinsic value is at (but I'd suspect that in the future this, too, will go away).

noosphr|5 months ago

>The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

You can describe what the code should do with natural language.

I've found that literate programming with agent calls works surprisingly well for this: write the tests first, then the code, then have the human refine the description of the code, and go back to step 1. One of these days I'll get around to writing an emacs mode to automate it, because right now it's yanking and killing between nearly a dozen windows.

Of course this is much slower than regular development but you end up with world class documentation and understanding of the code base.
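The loop described above can be sketched roughly like this (the `agent_*` and `human_refine` functions are hypothetical stand-ins for whatever LLM calls and editing steps you'd wire in; the emacs plumbing is omitted):

```python
# Literate-programming loop: description -> tests -> code -> refined
# description, then around again. All three step functions are placeholders.

def agent_write_tests(description):
    return f"tests for: {description}"        # placeholder for an LLM call

def agent_write_code(description, tests):
    return f"code satisfying: {tests}"        # placeholder for an LLM call

def human_refine(description):
    return description + " (refined)"         # placeholder for human editing

def literate_loop(description, rounds=2):
    for _ in range(rounds):
        tests = agent_write_tests(description)       # 1. tests first
        code = agent_write_code(description, tests)  # 2. then code
        description = human_refine(description)      # 3. refine the prose
    return description, code                         # next round restarts at 1
```

Slow, as noted, but the artifact you keep is the ever-improving description alongside the code.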

jay_kyburz|5 months ago

I can imagine an industry where we describe business rules to apply to data in natural language, and the AI simply provides an executable without source at all.

The role of the programmer would then be to test if the rules are being applied correctly. If not, there are no bugs to fix, you simply clarify the business rules and ask for a new program.

I like to imagine what it must be like for a non-technical business owner who employs programmers today. There is a meeting where a process or outcome is described, and a few weeks / months / years later a program is delivered. The only way to know if it does what was requested is to poke it a bit and see if it works. The business owner has no mental model of the code and can't go in and fix bugs.

update: I'm not suggesting I believe AI is anywhere near being this capable.

KoolKat23|5 months ago

Not really, it's more a case of "potentially can" rather than "will". This dynamic has always been there with the whole junior/senior dev split; it's not a new problem. You 100% can use it without losing this, and in an ideal world you can even go so far as to not worry about understanding the parts that are inconsequential.

enraged_camel|5 months ago

>> The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

All code is temporary and should be treated as ephemeral. Even if it lives for a long time, at the end of the day what really matters is data. Data is what helps you develop the type of deep understanding and expertise of the domain that is needed to produce high quality software.

In most problem domains, if you understand the data and how it is modeled, the need to be on top of how every single line of code works and the nitty-gritty of how things are wired together largely disappears. This is the thought behind the idiom “Don’t tell me what the code says—show me the data, and I’ll tell you what the code does.”

It is therefore crucial to start every AI-driven development effort with data modeling, and have lots of long conversations with AI to make sure you learn the domain well and have all your questions answered. In most cases, the rest is mostly just busywork, and handing it off to AI is how people achieve the type of productivity gains you read about.
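As a toy illustration of the data-first point (the domain and all names here are hypothetical): once the model below is pinned down, most of what any code operating on it can do follows from it.

```python
from dataclasses import dataclass
from enum import Enum

class OrderStatus(Enum):
    PENDING = "pending"
    PAID = "paid"
    SHIPPED = "shipped"

@dataclass(frozen=True)
class Order:
    id: int
    status: OrderStatus
    total_cents: int  # whole cents: the model itself rules out float rounding bugs

# Knowing only this model, you already know the legal states an order can be
# in and that all pricing code works in integer cents; the code that moves
# orders between states is largely the "busywork" derived from it.
```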

Of course, that's not to say you should blindly accept everything the AI generates. Reading the code and asking the AI questions is still important. But the idea that the only way to develop an understanding of the problem is to write the code yourself is no longer true. In fact, it was never true to begin with.

posix86|5 months ago

What is "understanding code", a mental model of the problem? These are terms for which we all have developed a strong & clear picture of what they mean. But may I remind us all that this used to not be the case before we entered this industry; we developed it over time. And we developed it based on a variety of highly interconnected factors, some of which are e.g.: what is a program, what is a programming language, what languages are there, what is a computer, what software is there, what editors are there, what problems are there.

And as we mapped out this landscape, weren't there countless situations where things felt dumb and annoying, and then sometimes they became useful, and sometimes they remained dumb? Things you thought were actively making you lose brain cells as you did them, because you were doing them wrong?

Or are you to claim that every hurdle you cross, every roadblock you encounter, every annoyance you overcome has pedagogical value to your career? There are so many dumb things out there. And what's more, there's so many things that appear dumb at first and then, when used right, become very powerful. AI is that: Something that you can use to shoot yourself in the foot, if used wrong, but if used right, it can be incredibly powerful. Just like C++, Linux, CORS, npm, tcp, whatever, everything basically.

halfadot|5 months ago

> The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.

No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to the lack of will on the developer's side. This lack of will to learn will not change the outcomes for you regardless of whether you're using an LLM. You can spend as much time as you want asking the LLM for in-depth explanations and examples to test your understanding.

So many of the criticisms of coding with LLMs I've seen really do sound like they're coming from people who already started with a pre-existing bias, fiddled with it for a short bit (or worse, never actually tried it at all) and assumed their limited experience is the be-all end-all of the subject. Either that, or they're typical skill issues.

weego|5 months ago

Who is this endless cohort of developers who need to maintain a 'deep understanding' of their code? I'd argue a high % of all code written globally on any given day that is not some flavour of boilerplate, while written with good intention, is ultimately just short-lived engineering detritus, if it even gets a code review to pass.

_fat_santa|5 months ago

> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

Here's mine. I use Cline occasionally to help me code, but more and more I find myself just coding by hand. The reason is pretty simple: with these AI tools you, for the most part, replace writing code with writing a prompt.

I look at it like this, if writing the prompt, and the inference time is less than what it would take me to write the code by hand I usually go the AI route. But this is usually for refactoring tasks where I consider the main bottleneck to be the speed at which my fingers can type.

For virtually all other problems it goes something like this: I can do task X in 10 minutes if I code it manually, or I can prompt AI to do it, and by the time I finish crafting the prompt and execute, it takes me about 8 minutes. Yes, that's a savings of 2 minutes on that task, and that's all fine and good assuming the AI didn't make a mistake. If I have to go back and re-prompt or manually fix something, then all of a sudden the time it took me to complete that task is 10-12 minutes with AI. Here the best case scenario is I just spent some AI credits for zero time savings, and the worst case is I spent AI credits AND the task was slower in the end.

With all sorts of tasks I now find myself making this calculation and for the most part, I find that doing it by hand is just the "safer" option, both in terms of code output but also in terms of time spent on the task.
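That back-of-envelope calculation can be written as a small expected-time model (the specific numbers and failure rate are illustrative assumptions, not measurements):

```python
# Expected wall-clock time for a task: manual coding is a fixed cost, while
# the AI route is prompt time plus the expected cost of a fix-up pass.

def expected_ai_minutes(prompt_min, fix_min, failure_rate):
    """Prompting cost plus the expected cost of re-prompting/manual fixes."""
    return prompt_min + failure_rate * fix_min

manual_minutes = 10

best_case = expected_ai_minutes(8, 0, 0.0)  # AI nails it: 8 minutes
realistic = expected_ai_minutes(8, 4, 0.5)  # half the time, a 4-minute fix

print(best_case)  # 8.0
print(realistic)  # 10.0 -> break-even with manual, plus credits spent
```

On this model the AI route only wins when the failure rate and fix-up cost stay low, which matches the "safer option" calculus above.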

didibus|5 months ago

> The reason is pretty simple which is with these AI tools you for the most part replace writing code with writing a prompt

I'm convinced I spend more time typing and end up typing more letters and words when AI coding than when not.

My hands are hurting more from all the extra typing I have to do now lol.

I'm actually annoyed they haven't integrated their voice to text models inside their coding agents yet.

rapind|5 months ago

I find myself often writing pseudo code (CLI) to express some ideas to the agent. Code can be a very powerful and expressive means of communication. You don't have to stop using it when it's the best / easiest tool for a specific case.

That being said, these agents may still just YOLO and ignore your instructions on occasion, which can be a time suck, so sometimes I still get my hands dirty too :)

bccdee|5 months ago

> the idea that technology forces people to be careless

I don't think anyone's saying that about technology in general. Many safety-oriented technologies force people to be more careful, not less. The argument is that this technology leads people to be careless.

Personally, my concerns don't have much to do with "the part of coding I enjoy." I enjoy architecture more than rote typing, and if I had a direct way to impose my intent upon code, I'd use it. The trouble is that chatbot interfaces are an indirect and imperfect vector for intent, and when I've used them for high-level code construction, I find my line-by-line understanding of the code quickly slips away from the mental model I'm working with, leaving me with unstable foundations.

I could slow down and review it line-by-line, picking all the nits, but that moves against the grain of the tool. The giddy "10x" feeling of AI-assisted coding encourages slippage between granular implementation and high-level understanding. In fact, thinking less about the concrete elements of your implementation is the whole advantage touted by advocates of chatbot coding workflows. But this gap in understanding causes problems down the line.

Good automation behaves in extremely consistent and predictable ways, such that we only need to understand the high-level invariants before focusing our attention elsewhere. With good automation, safety and correctness are the path of least resistance.

Chatbot codegen draws your attention away without providing those guarantees, demanding best practices that encourage manually checking everything. Safety and correctness are the path of most resistance.

godelski|5 months ago

(Adding to your comment, not disagreeing)

  > The argument is that this technology leads people to be careless.
And this will always be a result of human preference optimization. There's a simple fact: humans prefer lies that they don't know are lies over lies that they do know are lies.

We can't optimize for an objective truth when that objective truth doesn't exist. So while we do our best to align our models, they simultaneously optimize their ability to deceive us. There's little to no training in that loop where outputs are deeply scrutinized, because we can't scale that type of evaluation. We end up rewarding models that are incorrect in their output.

We don't optimize for correctness, we optimize for the appearance of correctness. We mustn't confuse the two.

The result is: when LLMs make errors, those errors are difficult for humans to detect.

This results in a fundamentally dangerous tool, does it not? Good tools fail safely and loudly when they error. This one instead fails silently. That doesn't mean you shouldn't use the tool, but that you need to do so with an abundance of caution.

  > I could slow down and review it line-by-line, picking all the nits, but that moves against the grain of the tool.
Actually the big problem I have with coding with LLMs is that it increases my cognitive load, not decreases it. Being overworked results in carelessness. Who among us does not make more mistakes when tired or hungry?

That's the opposite of lazy, so hopefully that answers OP.

lelanthran|5 months ago

> If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting.

This argument is wearing a little thin at this point. I see it multiples times a day, rephrased a little bit.

The response, "How well do you think your thinking will go if you had not spent years doing the 'practice' part?", is always followed by either silence or a non sequitur.

So, sure, keep focusing on the 'thinking' part, but your thinking will get more and more shallow without sufficient 'doing'.

t0mas88|5 months ago

Separate from AI, as your role becomes more tech lead / team lead / architect you're also not really "doing" as much and still get involved in a lot of thinking by helping people get unstuck. The thinking part still builds experience. You don't need to type the code to have a good understanding of how to approach problems and how to architect systems. You just need to be making those decisions and gaining experience from them.

kristianbrigman|5 months ago

It's about as much time as I think about caching artifacts and branch mispredict latencies. Things I cared a lot about when I was doing assembly, but don't even think about really in Python (or C++).

My assembly has definitely rotted and I doubt I could do it again without some refreshing but it's been replaced with other higher-level skills, some which are general like using correct data structures and algorithms, and others that are more specific like knowing some pandas magic and React Flow basics.

I expect this iteration I'll get a lot better at systems design, UML, algorithm development, and other things that are slightly higher level. And probably reverse-engineering as well :) The computer engineering space is still vast IMHO....

johnfn|5 months ago

Do you think that all managers and tech leads atrophy because they don’t spend all day “doing”? I think a good number of them become more effective because they delegate the simple parts of their work that don’t require deep thought, leaving them to continue to think hard about the thorniest areas of what they’re working on.

Or perhaps you’re asking how people will become good at delegation without doing? I don’t know — have you been “doing” multiple years of assembly? If not, how are you any good at Python (or whatever language you currently use?). Probably you’d say you don’t need to think about assembly because it has been abstracted away from you. I think AI operates similarly by changing the level of abstraction you can think at.

jayd16|5 months ago

My take is just that debugging is harder than writing so I'd rather just write it instead of debugging code I didn't write.

rwmj|5 months ago

I think it's more like code review, which really is the worst part of coding. With AI, I'll be doing less of the fun bits (writing, debugging those super hard customer bugs), and much much more code review.

erichocean|5 months ago

Are people really not using LLMs to debug code?

sciencejerk|5 months ago

@shredprez the website in your bio appears to sell AI-driven products: "Design anything in Claude, Cursor, or VS Code".

Consider leaving a disclaimer next time. Seems like you have a vested interest in the current half-baked generation of AI products succeeding

moffkalast|5 months ago

Conflict of interest or not, he's not really wrong. Anyone shipping code in a professional setting doesn't just push to prod after 5 people say LGTM to their vibe coded PR, as much as we like to joke around with it. There are stages of tests and people are responsible for what they submit.

As someone writing lots of research code, I do get caught being careless on occasion since none of it needs to work beyond a proof of concept, but overall being able to just write out a spec and test an idea out in minutes instead of hours or days has probably made a lot of things exist that I'd otherwise never be arsed to bother with. LLMs have improved enough in the past year that I can easily 0-shot lots of ad-hoc visualization stuff or adapters or simple simulations, filters, etc. that work on the first try and with probably fewer bugs than I'd include in the first version myself. Saves me actual days and probably a carpal tunnel operation in the future.

closeparen|5 months ago

It's "anti-AI" from the perspective of an investor or engineering manager who assumes that 10x coding speed should 10x productivity in their organization. As a staff IC, I find it a realistic take on where AI actually sits in my workflow and how it relates to juniors.

visarga|5 months ago

> assumes that 10x coding speed should 10x productivity

This same error in thinking happens in relation to AI agents too. Even if the agent is perfect (not really possible) but other links in the chain are slower, the overall speed of the loop still does not increase. To increase productivity with AI you need to think of the complete loop, reorganize and optimize every link in the chain. In other words a business has to redesign itself for AI, not just apply AI on top.

Same is true for coding with AI, you can't just do your old style manual coding but with AI, you need a new style of work. Maybe you start with constraint design, requirements, tests, and then you let the agent loose and not check the code, you need to automate that part, it needs comprehensive automated testing. The LLM is like a blind force, you need to channel it to make it useful. LLM+Constraints == accountable LLM, but LLM without constraints == unaccountable.
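One minimal version of the "LLM + constraints" idea: gate whatever the agent produces behind an automated check instead of a line-by-line review. Everything below is a hypothetical sketch (`generated_slugify` stands in for model-written code):

```python
import re

def generated_slugify(title):
    """Stand-in for a function the agent produced."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def passes_constraints(fn):
    """The accountable part: fixed examples plus an invariant, run automatically."""
    examples = {"Hello, World!": "hello-world", "  spaces  ": "spaces"}
    case_insensitive = fn("ABC") == fn("abc")  # invariant, not a single example
    return case_insensitive and all(fn(i) == o for i, o in examples.items())

# Accept or reject the generated version without reading it line by line.
print(passes_constraints(generated_slugify))  # True
```

The constraints, not the reviewer's attention, are what make the output accountable; if the check fails, you regenerate rather than hand-patch.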

overfeed|5 months ago

> AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting

It does not! If you're using interactive IDE AI, you spend your time keeping the AI on the rails and reminding it what the original task is. If you're using agents, then you're delegating all the mid-level/tactical thinking, and perhaps even the planning, and you're left with the task of writing requirements granular enough for an intern to tackle; but this hews closer to "Business Analyst" than "Software Engineer".

Marha01|5 months ago

From my experience, current AI models stay on the rails pretty well. I don't need to remind them of the task at hand.

malyk|5 months ago

Using an agentic workflow does not require you to delegate the thinking. Agents are great at taking exactly what you want to do and executing. So spend an extra few minutes and lay out the architecture YOU want, then let the AI do the work.

HiPhish|5 months ago

> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

I think this might simply be how the human brain works. Take autonomous driving as an example: while the car drives on its own the human driver is supposed to be alert and step in if needed. But does that work? Or will the driver's mind wander off because the car has been driving properly for the last half hour? My gut feeling is that it's inevitable that we'll eventually just shut out everything that goes smoothly and by the time it doesn't it might be too late.

We are not that different from our ancestors who used to roam the forests, trying to eat before they got eaten. In such an environment there is constantly something going on, some critters crawling, some leaves rustling, some water flowing. It would drive us crazy if we could not shut out all this regular noise. It's only when an irregularity appears that our attention must spring into action. When the leaves rustle differently than they are supposed to, there is a good chance that there is some prey or a predator to be found. This mechanism only works if we are alert. The sounds of the forest are never exactly the same, so there is constant stimulation to keep us on our toes. But if you are relaxing in your shelter the tension is gone.

My fear is that AI is too good, to the point where it makes us feel like being in our shelter rather than in the forest.

halfcat|5 months ago

> My gut feeling is that it's inevitable that we'll eventually just shut out everything that goes smoothly and by the time it doesn't it might be too late.

Yes. Productivity accelerates at an exponential rate, right up until it drives off a cliff (figuratively or literally).

rhetocj23|5 months ago

Ah yes, finally someone who gets it. You're a smart fella.

I view the story of LLMs akin to the Concorde. Something catastrophic will happen that will be too big to ignore and all trust will implode.

didibus|5 months ago

> If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting

I think this depends. I prefer the thinking bit, but it's quite difficult to think without the act of coding.

It's how white boarding or writing can help you think. Being in the code helps me think, allows me to experiment, uncover new learnings, and evolve my thinking in the process.

Though maybe we're talking about thinking of different things? Are you thinking in the sense of what a PM thinks about ? User features, user behavior, user edge cases, user metrics? Or do you mean thinking about what a developer thinks about, code clarity, code performance, code security, code modularization and ability to evolve, code testability, innovative algorithms, innovative data-structure, etc. ?

nativeit|5 months ago

I’m struggling to understand how they are asserting one follows from the other. I’m not a SWE, but do a lot of adjacent types of work (infrastructure automation and scripting, but also electronics engineering, and I’m also a musician), and the “thinking” part where I get to deploy logic and reasoning to solve novel challenges is certainly a common feature among these activities I certainly enjoy, and I feel it’s a core component of what I’m doing.

But the result of that thinking would hardly ever align neatly with whatever an LLM is doing. The only time it wouldn’t be working against me would be drafting boilerplate and scaffolding project repos, which I could already automate with more prosaic (and infinitely more efficient) solutions.

Even if it gets most of what I had in mind correct, the context switching between “creative thinking” and “corrective thinking” would be ruinous to my workflow.

I think the best case scenario in this industry will be workers getting empowered to use the tools that they feel work best for their approach, but the current mindset that AI is going to replace entire positions, and that individual devs should be 10x-ing their productivity is both short-sighted and counterproductive in my opinion.

strogonoff|5 months ago

I never made a case against LLMs and similar ML applications in the sense that they negatively impact mental agility. The cases I made so far include, but are not limited to:

— OSS exploded on the promise that software you voluntarily contributed to remains available to benefit the public, and that a large corporation cannot simply take your work tomorrow and make it part of their product, never contributing anything back. Commercially operated LLMs threaten OSS both by laundering code and by overwhelming maintainers with massive, automatically produced patches and merge requests that are sometimes never read by a human.

— Being able to claim that any creative work is merely a product of an LLM (which is a reality now for any new artist, copywriter, etc.) removes a large motivator for humans to do fully original creative work and is detrimental to creativity and innovation.

— The ends don’t justify the means, as a general philosophical argument. Large-scale IP theft had been instrumental at the beginning of this new wave of applied ML—and it is essentially piracy, except done by the powerful and wealthy against the rest of us, and for profit rather than entertainment. (They certainly had the money to license swaths of original works for training, yet they chose to scrape and abuse the legal ambiguity due to requisite laws not yet existing.)

— The plain old practical “it will drive more and more people out of jobs”.

— Getting everybody used to the idea that LLMs now mediate access to information increases inequality (making those in control of this tech and their investors richer and more influential, while pushing the rest—most of whom are victims of the aforementioned reverse piracy—down the wealth scale and often out of jobs) more than it levels the playing field.

— Diluting what humanity is. Behaving like a human is how we manifest our humanness to others, and how we deserve humane treatment from them; after entities that walk and talk exactly like a human would, yet which we can be completely inhumane to, become commonplace, I expect over time this treatment will carry over to how humans treat each other—the differentiator has been eliminated.

— It is becoming infeasible to operate open online communities due to bot traffic that now dwarfs human traffic. (Like much of the above, this is not a point against LLMs as technology, but rather the way they have been trained and operated by large corporate/national entities; if an ordinary person wanted to self-host their own, they would simply not have the technical capability to cause disruption at this scale.)

This is just what I could recall off the top of my head.

m0rde|5 months ago

Good points here, particularly the ends not justifying the means.

I'm curious for more thoughts on "will drive more and more people out of jobs". Isn't this the same for most advances in technology (e.g., steam engines, computers, automated toll plazas, etc.)? In some ways, it's motivation for making progress; you get rid of mundane jobs. The dream is that you free those people to do something more meaningful, but I'm not going to be that blindly optimistic :) Still, I feel like "it's going to take jobs" is the weakest of the arguments here.

benoau|5 months ago

> — The ends don’t justify the means. IP theft that lies in the beginning of this new wave of applied ML is essentially piracy

Isn't "AI coding" trained almost entirely on open source code and published documentation?

LittleCloud|5 months ago

> There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work. If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting. But if you like the doing, the typing, fiddling with knobs and configs, etc etc, all AI does is take the good part away.

I don't know... that seems like a false dichotomy to me. I think I could enjoy both but it depends on what kind of work. I did start using AI for one project recently: I do most of the thinking and planning, and for things that are enjoyable to implement I still write the majority of the code.

But for tests, build system integration, ...? Well that's usually very repetitive, low-entropy code that we've all seen a thousand times before. Usually not intellectually interesting, so why not outsource that to the AI.

And even for the planning part of a project there can be a lot of grunt work too. Haven't you had the frustrating experience of attempting a refactoring and finding out midway that it doesn't work because of some edge case? Sometimes the edge case is interesting and points to some deeper issue in the design, but sometimes not. Either way it sure would be nice to get a hint beforehand. Although in my experience AIs aren't at a stage to reason about such issues upfront (no surprise, since it's difficult for humans too), it helps if your software has an oracle for whether the attempted changes are correct, i.e. it is statically typed and/or has thorough tests.

bluefirebrand|5 months ago

> Usually not intellectually interesting, so why not outsource that to the AI.

Because it still needs to be correct, and AI still is not producing correct code

mhitza|5 months ago

I agree with your comment's sentiment, but I believe that you, like many others, have the cycle in the wrong order. I don't fault anyone for it, because it's the flow that got handed down to us from the days of waterfall development.

My strong belief, after almost twenty years of professional software development, is that both we and LLMs should be following the order: build, test, reflect, plan, build.

Writing out the implementation is the process of materializing the requirements, and learning the domain. Once the first version is out, you can understand the limits and boundaries of the problem and then you can plan the production system.

This is very much in line with Fred Brooks' "build one to throw away" (written ~40 years ago in "The Mythical Man-Month"; while often quoted, if you've never read his book, I urge you to do so: it's both entertaining and enlightening about our software industry), startup culture (if you remove the "move fast, break things" mantra), and governmental pilot programs (the original "minimum viable").

bgwalter|5 months ago

"AI" does not encourage real thinking. "AI" encourages hand waving grand plans that don't work, CEO style. All pro-"AI" posts focus on procedures and methodologies, which is just LARPing thinking.

Using "AI" is just like speed reading a math book without ever doing single exercise. The proponents rarely have any serious public code bases.

rhetocj23|5 months ago

Exactly.

And this should not be a surprise at all. Humans are optimisers of truth, NOT maximisers. There is a subtle and nuanced difference. Very few actually spend their entire existence being maximisers; it's pretty exhausting to be of that kind.

Optimising = we look for what feels right or surpasses some threshold of "looks about right". Maximising = we think deeply, logically reason our way to what is right, and conduct tests to ensure it is so.

Now, if you have the discipline to choose when to shift between the two modes, this can work. Most people do not, though. And therein lies the danger.

cgh|5 months ago

A surprising conclusion to me at least is that a lot of programmers simply don’t like to write code.

belter|5 months ago

> "AI" encourages hand waving grand plans that don't work

You described the current AI Bubble.

AnotherGoodName|5 months ago

I see a lot of comments like this, and it reflects strongly negatively on the engineers who write it, imho. I've been a staff-level engineer at both Meta and Google and a lead at various startups in my time. I post open source projects here on HN from time to time that are appreciated. I know my shit. If someone tells me that LLMs aren't useful, I think to myself "wow, this person is so unable to learn new tools that they can't find value in one of the biggest changes happening today".

That's not to say that LLMs are as good as some of the more outrageous claims. You do still need to do a lot of work to implement code. But if you're not finding value at all, it honestly reflects badly on you and your ability to use tools.

The craziest thing is I see the above type of comment on LinkedIn regularly. Which is jaw-dropping. Prospective hiring managers will read it and think "Wow, you think advertising a lack of knowledge is helpful to your career?" Big tech cos are literally firing people with attitudes like the above. There's no room for people who refuse to adapt.

I put absolute LLM negativity right up there with comments like "I never use a debugger and just use printf statements". To me it just screams that you never learnt the tool.

abustamam|5 months ago

> The plan-build-test-reflect loop is equally important when using an LLM to generate code, as anyone who's seriously used the tech knows

Yeah I'm actually quite surprised that so many people are just telling AI to do X without actually developing a maintainable plan to do so first. It's no wonder that so many people are anti-vibe-coding — it's because their exposure to vibe coding is just telling Replit or Claude Code to do X.

I still do most of my development in my head, but I have a go-to prompt I give Claude Code when I'm stuck: "without writing any code, and maintaining existing patterns, tell me how to do X." It'd spit out some stuff, I'd converse with it to make sure it's a feasible solution that would work long-term, then I tell it to execute the plan. But the process still starts in my head, not with a prompt.

elicash|5 months ago

My approach has been to "yolo" my way through the first time, yes in a somewhat lazy and careless manner, get a working version, and then build a second time more thoughtfully.

stein1946|5 months ago

> in every industry where AI has achieved any level of mastery.

Which industries are those? What does that mastery look like?

> There's a divide between people ...

No, there is not. If one is not willing to figure out a couple of ffmpeg flags, comb through k8s controller code to see what is possible, and fix that boot error in their VMs, then failure in the "mental experiences" is certain.

The most successful people I have met in this profession are the ones who absolutely do not tolerate magic and need to know what happens from the moment they press ON on their machine till the moment they turn it OFF again.

jmull|5 months ago

> There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work.

Pretty clearly that’s not the divide anyone’s talking about, right?

Your argument should maybe be something about thinking about the details vs thinking about the higher level. (If you were to make that argument, my response would be: both are valuable and important. You can only go so far working at one level. There are certainly problems that can be solved at one level, but also ones that can’t.)

wat10000|5 months ago

I suspect the root of the disagreement is more about what kinds of work people do. There are many different kinds of programming and you can’t lump them all together. We shouldn’t expect an AI tool to be a good fit for all of them, any more than we should expect Ruby to be a good fit for embedded development or C to be a good fit for web apps.

My experience with low level systems programming is that it’s like working with a developer who is tremendously enthusiastic but has little skill and little understanding of what they do or don’t understand. Time I would have spent writing code is replaced by time spent picking through code that looks superficially good but is often missing key concepts. That may count as “thinking” but I wouldn’t categorize it as the good kind.

Where it excels for me is as a superpowered search (asking it to find places where we play a particular bit-packing game with a particular type of pointer works great and saves a lot of time) and for writing one-off helper scripts. I haven’t found it useful for writing code I’m going to ship, but for stuff that won’t ship it can be a big help.

It’s kind of like an excavator. If you need to move a bunch of dirt from A to B then it’s great. If you need to move a small amount of dirt around buried power lines and water mains, it’s going to cause more trouble than it’s worth.

Balinares|5 months ago

I think this is one of the most cogent takes on the topic that I've seen. Thanks for the good read!

It's also been my experience that AI will speed up the easy / menial stuff. But that's just not the stuff that takes up most of my time in the first place.

chamomeal|5 months ago

Idk I feel like even without using LLMs the job is 90% thinking and planning. And it’s nice to go the last 10% on your own to have a chance to reflect and challenge your earlier assumptions.

I actually end up using LLMs in the planning phase more often than the writing phase. Cursor is super good at finding relevant bits of code in unfamiliar projects, showing me what kind of conventions and libraries are being used, etc.

ChrisMarshallNY|5 months ago

It's like folks complaining that people don't know how to code in Assembly or Machine Language.

New-fangled compiled languages...

Or who use modern, strictly-typed languages.

New-fangled type-safe languages...

As someone that has been coding since it was wiring up NAND gates on a circuit board, I'm all for the new ways, but there will definitely be a lot of mistakes, jargon, and blind alleys; just like every other big advancement.

martin-t|5 months ago

The last paragraph feels more wrong the more I think about it.

Imagine an AI as smart as some of the smartest humans, able to do everything they intellectually do but much faster, cheaper, 24/7 and in parallel.

Why would you spend any time thinking? All you'll be doing is the things an AI can't do: 1) feeding it input from the real world and 2) trying out its output in the real world.

1) Could be finding customers, asking them to describe their problem, arranging meetings, driving to the customer's factory to measure stuff and take photos for the AI, etc.

2) Could be assembling the prototype, soldering, driving it to the customer's factory, signing off the invoice, etc.

None of that is what I as a programmer / engineer enjoy.

If actual human-level AI arrives, it'll do everything from concept to troubleshooting, except the parts where it needs presence in the physical world and human dexterity.

If actual human-level AI arrives, we'll become interfaces.

gspr|5 months ago

For me it's simply this: the best thing about computers and programming is that they do exactly what the code I write says they'll do. That is a quality that humans and human/natural languages don't have. To me, LLMs feel like replacing the best property of computers with a (in this context) terrible property of humans.

Why would I want a fuzzy, vague, imprecise, up-to-interpretation programming language? I already have to struggle with that in documentation, specifications, peers and – of course – myself. Why would I take the one precise component and make it suffer from the same?

This contrasts of course with tasks such as search, where I'm not quite able to precisely express what I want. Here I find LLMs to be a fantastic advance. Same for e.g. operations between imprecise domains, like between natural languages.

jaredklewis|5 months ago

> There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work.

Does this divide between "physical" and "mental" exist? Programming languages are formal languages that allow you to precisely and unambiguously express your ideas. I would say that "fiddling" with the code (as you say) is a kind of mental activity.

If there is actually someone out there that only dislikes AI coding assistants because they enjoy the physical act of typing and now have to do less of it (I have not seen this blog post yet), then I might understand your point.

latexr|5 months ago

> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

Are you genuinely saying you never saw a critique of AI on environmental impact, or how it amplifies biases, or how it widens the economic gap, or how it further concentrates power in the hands of a few, or how it facilitates the dispersion of misinformation and surveillance, directly helping despots erode civil liberties? Or, or, or…

You don’t have to agree with any of those. You don’t even have to understand them. But to imply anti-AI arguments “hinge on the idea that technology forces people to be lazy/careless/thoughtless” is at best misinformed.

Go grab whatever your favourite LLM is and type “critiques of AI”. You’ll get your takes.

jayd16|5 months ago

I'm not an AI zealot, but I think some of these are overblown.

The energy-cost argument is nonsensical unless you pin down a value-out vs. value-in ratio, and some would argue the output is highly valuable and the input cost is priced in.

I don't know if it will end up being a concentrated power. It seems like local/open LLMs will still be in the same ballpark. Despite the absurd amounts of money spent so far the moats don't seem that deep.

Baking in bias is a huge problem.

The genie is out of the bottle as far as people using it for bad. Your own usage won't change that.

kiitos|5 months ago

> If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting...

What about if the "knowing/understanding" bit is your favorite part?

swiftcoder|5 months ago

> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

What makes you regard this as an anti-AI take? To my mind, this is a very pro-AI take

analog8374|5 months ago

Here's one:

AI can only recycle the past.

grim_io|5 months ago

Most of us do nothing but remix the past solutions.

Since we don't know what else might already exist in the world without digging very deep, we fool ourselves into thinking that we are doing something very original and unique.

Vegenoid|5 months ago

I'm not sure if you are insinuating that the article is an anti-AI take, but in case it wasn't clear, it's not. It is about doing just what you suggested:

> Just as tech leads don't just write code but set practices for the team, engineers now need to set practices for AI agents. That means bringing AI into every stage of the lifecycle

The technology doesn't force people to be careless, but it does make it very easy to be careless, without having to pay the costs of that carelessness until later.

layer8|5 months ago

My experience is that you need the “physical” coding work to get a good intuition of the mechanics of software design, the trade-offs and pitfalls, the general design landscape, and so on. I disagree that you can cleanly separate the “mental” portion of the work. Iterating on code builds your mental models, in a way that merely reviewing code does not, or only to a much more superficial degree.

pluto_modadic|5 months ago

It's mostly seeing juniors and project managers write garbage, creating a massive pile of BS for us to clean up, that pisses us off.

resonious|5 months ago

I actually didn't really interpret this as anti-AI. In the end it was pretty positive about AI and I pretty much agree with the conclusion.

Though I will also dogpile on the "thankless tasks" remark and say that the stuff that I have AI blast through is very thankless. I do not enjoy changing 20 different files to account for a change in struct definition.

raincole|5 months ago

The first two paragraphs are so confusing. Since Claude Code became a thing my "thinking" phase has been much, much longer than before.

I honestly don't know how one can use Claude Code (or other AI agents) in a "code first, think later" manner.

pg3uk|5 months ago

What you've described there is the difference between a good developer and a bad one.

A dev that spends an undue amount of time fiddling with knobs and configs probably sucks. Their mind isn't on the problem that needs to be solved.

croes|5 months ago

>I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

It’s not force but simply human nature. We invent tools to do less. That’s the whole point of tools.

giantg2|5 months ago

"I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless."

I'm not impressed by AI because it generates slop. Copilot can't write a thorough working test suite to save its life. I think we need a design-and-test paradigm to properly communicate with AI for it to build great software.

benterix|5 months ago

> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

Not forces, encourages.

nimithryn|5 months ago

I think that the problem is, at the end of the day, the engineer must specify exactly what they want the program to do.

You can do this in Python, or you can do this in English. But at the end of the day the engineer must input the same information to get the same behavior. Maybe LLMs make this a bit more efficient, but even in English it is extremely hard to give an exact specification without ambiguity (maybe even harder than in Python in some cases).
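As a toy illustration of that gap (my own hypothetical example, not something from the thread): each line of Python below pins down exactly one behavior, while the English request "sort the names" leaves case handling, and hence the result, up to interpretation.

```python
names = ["alice", "Bob", "Carol"]

# One precise specification: a stable, case-insensitive sort.
case_insensitive = sorted(names, key=str.lower)
# -> ["alice", "Bob", "Carol"]

# Another precise specification: plain codepoint order.
codepoint = sorted(names)
# -> ["Bob", "Carol", "alice"]  (uppercase letters sort before lowercase)

# The English request "sort the names" doesn't say which of these
# (or several other behaviors) the customer actually meant.
print(case_insensitive, codepoint)
```

Both programs are "sorting the names"; only the formal language forces you to pick one.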

EGreg|5 months ago

Most of my anti-AI takes are either:

1) Bad actors using AI at scale to do bad things

2) AI just commodifying everything and making humans into zoo animals

specproc|5 months ago

My anti AI take is that it's no fun.

I'm on a small personal project with it intentionally off, and I honestly feel I'm moving through it faster and certainly having a better time. I also have a much better feel for the code.

These are all just vibes, in the parlance of our times, but it's making me question why I'm bothering with LLM assisted coding.

Velocity is rarely the thing in my niche, and I'm not convinced babysitting an agent is all in all faster. It's certainly a lot less enjoyable, and that matters, right?

add-sub-mul-div|5 months ago

More specifically for (1), the combined set of predators, advertisers, businesses, and lazy people using it to prey, enshittify, or cheat will make up the vast majority of use cases.

_heimdall|5 months ago

Read Eliezer Yudkowsky. He raises plenty of anti-AI arguments, none of them have to do with laziness.

otabdeveloper4|5 months ago

> technology forces people to be lazy/careless/thoughtless

AI isn't a technology. (No more than asking your classmate to do your homework for you is a "technology".)

Please don't conflate AI with programming tools. AI isn't a tool, it is an oracle. There's a huge fundamental gap here that cannot be bridged.

solumunus|5 months ago

It’s crazy to me that some people love the pressing keys parts so much.

haskellshill|5 months ago

Funny that you imagine AI-coders doing any sort of thinking

agentcoops|5 months ago

Completely agreed. Whether it be AI or otherwise, I consider anything that gives me more time to focus on figuring out the right problem to solve or iterating on possible solutions to be good.

Yet every time someone here earnestly testifies to whatever slight but real use they've found for AI, an army of commentators appears, ready to gaslight them into doubting themselves, always citing that study meant to have proven that any apparent usefulness of AI is an illusion.

At this point, even just considering the domain of programming, there’s more than enough testimony to the contrary. This doesn’t say anything about whether there’s an AI bubble or overhype or anything about its social function or future. But, as you note, it means these cardboard cutout critiques of AI need to at least start from where we are.

blehn|5 months ago

> There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work

Eh, physical and mental isn't the divide — it's more like people who enjoy code itself as a craft and people who simply see it as a means to an end (the application). Much like a writer might labor over their prose (the code) while telling a story (the application). Writing code is far more than the physical act of typing to those people.

martin-t|5 months ago

> I would love to see an anti-AI take that doesn't hinge on the idea that technology forces people to be lazy/careless/thoughtless.

Here's a couple points which are related to each other:

1) LLMs are statistical models of text (code being text). They can only exist because huge for-profit companies ingested a lot of code under proprietary, permissive, and copyleft licenses, most of which at the very least require attribution; some reserve rights for the authors, some give extra rights to users.

LLM training mixes and repurposes the work of human authors in a way which gives the companies plausible deniability against any single author, yet the output is clearly only possible because of the input. If you trained an LLM on only Google's source code, you'd be sued by Google, and it would almost certainly reproduce snippets which could be traced back to Google's code. But by taking way, way more input data, the blender cuts the sources into such fine pieces that they become undetectable, yet the output is clearly still based on the labor of other people who have not been paid.

Hell, GPT-3 still produced verbatim snippets of the fast inverse square root and probably other well-known but licensed code. And GitHub has a checkbox which scans for verbatim matches so you don't accidentally infringe copyright by using Copilot in a way which is provable. Which means they take extra care to make it unprovable.
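(For reference, the snippet in question is the widely quoted Quake III fast inverse square root, whose "magic constant" makes verbatim reproduction easy to spot. A rough Python transliteration of the well-known C original, for readers who haven't seen it:)

```python
import struct

def q_rsqrt(number: float) -> float:
    """Approximate 1/sqrt(number) via the famous bit-level hack."""
    # Reinterpret the float's bits as a 32-bit integer (the C pointer pun)
    i = struct.unpack('<i', struct.pack('<f', number))[0]
    i = 0x5f3759df - (i >> 1)  # the "magic" constant
    y = struct.unpack('<f', struct.pack('<i', i))[0]
    return y * (1.5 - 0.5 * number * y * y)  # one Newton-Raphson step

print(q_rsqrt(4.0))  # roughly 0.5
```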

If I "write a book" by taking an existing book but replacing every word with a synonym, it's still plagiarism and copyright infringement. It doesn't matter if the mechanical transformation is way more sophisticated, the same rules should apply.

2) There's no opt out. I stopped writing open source over a year ago when it became clear all my code is unpaid labor for people who are much richer than me and are becoming richer at a pace I can't match through productive work because they own assets which give them passive income. And there's no license I can apply which will stop this. I am not alone. As someone said, "Open-Source has turned into a form of unpaid internship"[0]. It might lead to a complete death of open source because nobody will want to see their work fed into a money printing machine (subscription based LLM services) and get nothing in return for their work.

> But if you like the doing, the typing, fiddling with knobs and configs, etc etc, all AI does is take the good part away.

I see it quite the opposite way. For me, what makes programming fun is deeply understanding a problem and coming up with a correct, clear-to-understand, elegant solution. But most problems a working programmer has are just variations of what other programmers have had. The remaining work is prompting the LLMs in the right way so that they produce this (describing the problem instead of thinking about its solutions) and debugging the bugs the LLMs generated.

A colleague vibe-coded a small utility. It's useful, but it's broken in so many ways: the UI falls apart when some text gets too long, labels are slightly incorrect and misleading, some fields handle decimal numbers in weird ways, etc. With manually written code, a programmer would get these right the first time. Potential bugs become obvious as you're writing the code because you are thinking about it. But they do not occur to someone prompting an LLM. Now I can either fix them manually, which is time-consuming and boring, or I can try prompting an LLM about every single one, which is less time-consuming but more boring and likely to break something else.

Most importantly, using an LLM does not give me deeper understanding of the problem or the solution, it keeps knowledge locked in a black box.

[0]: https://aria.dog/barks/forklift-certified-license/

nchmy|5 months ago

Strongly agree with this

nenenejej|5 months ago

OK: AI is slow when using the said loop. AI is like poker: you bet with time. Sixty seconds to type a prompt and generate a response. Oh, it's wrong? OK, let's gamble another 60 seconds...

At least when doing stuff the old way you learn something if you waste time.

That said, AI is useful enough, and some poker games are +EV.

So this is more a caution-AI than an anti-AI take. It is more an anti-vibe-koolaid take.

lukaslalinsky|5 months ago

This depends entirely on how you use said AI. You can have it read code, explain why something was done this or that way, and once it has the context, ask it to think about implementing feature X. There is almost no gambling involved there; at most, the level of frustration you would have with a colleague. If you start from a blank context and tell it to implement a full app, you are purely gambling.

trepaura|5 months ago

I'll give you what you're asking for. Genuine academic research has shown a clear result: AI is slower than an experienced engineer. It doesn't speed up the process, because of the loop you describe; it's terrible at it.