I used to share this sentiment but the more I used AI for programming, the less I enjoyed it. Even writing "boring" code (like tests or summaries) by hand increased my understanding of what I wrote and how it integrates into the rest of the codebase, which I think is fun.
Letting a robot write code for me, however tedious it would be to write manually, made me feel like I was working in someone else's codebase. It reminds me of launching a videogame and letting someone else play through the boring parts. I might as well not be playing. Why bother at all?
I understand this behaviour if you're working for a company on some miserable product, but not for personal projects.
I gain comprehension through the authoring process. I've always been weaker on the side of reviewing and only really gained an understanding of new tooling added by coworkers when I get to dig in and try to use it. This is absolutely a learning style thing: I have ADHD and have known since high school that I am more engaged by practical work and have trouble with dry, lecture-style teaching. I have even excelled in pretty abstract and theoretical fields, but it takes working through problem solving, even if those problems are abstract and hard to mechanically represent.
So I am in the same boat. AI can write some good skeleton code for different purposes so I can get running faster, but with anything complex and established it serves very little benefit. I'll end up spending more time trying to understand why and how it is doing something than I'd spend just doing it myself. When AI is a magical fix button that's awesome, but even in those circumstances I'm just buying LLM-debt - if I never need to touch that code again it's fine, but if I need to revise the code then I'll need to invest more time into understanding it and cleaning it up than I initially saved.
I'm not certain how much other folks are feeling this or if it's just me and the way my brain works, but I struggle to see the great savings outside of dead simple tasks.
AI stops coding being about the journey, and makes it about the destination. That is the polar opposite of most people's coding experience as a professional. Most developers are not about the destination, and often don't really care about the 'product', preferring to care about the code itself. They derive satisfaction from how they got to the end product instead of the end product itself.
For those developers who just want to build a thing to drive business value, or because they want a tool that they need, or because they think the end result will be fun to have, AI coding is great. It enables them to skip over (parts of) the tedious coding bit and get straight to the result bit.
If you're coding because you love coding then obviously skipping the coding bit is going to be a bad time.
I generally agree with you. As a recent father with a toddler, in a household where both parents have a full-time job, I’ve found that the only way I can make time for those personal side projects is to use AI to do most of the bootstrapping, and then do the final tweaks on my own. Most of this is around home automation, managing my Linux ISO server, among other things. But it certainly would be more fun and rewarding if I did it all myself.
My favourite code to write used to be small clean methods perhaps containing ~20 lines of logic. And I agree, it's fun coming up with ways to test that logic and seeing it pass those tests.
I'm not sure I'll ever write this kind of code again now. For months now all I've done is think about the higher level architectural decisions and prompt agents to write the actual code, which I find enjoyable, but architectural decisions are less clean and therefore for me less enjoyable. There's often a very clear good and bad way to write a method, but how you organise things at a higher level is much less binary. I rarely ever get that, "yeah, I've done a really good job there" feeling when making higher level decisions, but more of "eh, I think this is probably a good solution/compromise, given the requirements".
Not to be that curmudgeon (who am I kidding), but it's made reviewing code very much less enjoyable, and I review more changes than I write. Engineers merrily sending fixes they barely understand (or, worse, don't think they need to understand) for the rest of us to handle, and somehow lines-of-code has become a positive metric again. How convenient!
It has always been my opinion (and borne out by our statistics internally, when counting self-review in the form of manual testing and automated test writing) that reviewing code (to the level of catching defects) often takes more time than actually building the solution. So I have a pretty big concern that the majority of AI code generation ends up adding more time to tasks than it saves, because it's optimizing the cheap tasks at the expense of the costly tasks.
I love everything about coding. I love architecting a system, and I love tending all the little details. I love to look at the system as a whole or a block of code in isolation and find nothing I want to change, and take pride in all of it. I also love making products.
LLM-agents have made making products, especially small ones, a lot easier, but sacrifice much of the crafting of details and, if the project is small enough, the architecture. I've certainly enjoyed using them a lot over the last year and a half, but I've come to really miss fully wrapping my head around a problem, having intimate knowledge of the details of the system, and taking pride in every little detail.
Me too, and I'm glad to see that this point keeps being brought up. I noticed that what shapes my satisfaction (or dissatisfaction) about working with AI depends on whether I have an understanding of what's being built or not.
For a prototype, it's pretty amazing to generate a working app with one or two prompts. But when I get serious about it, it becomes such a chore. The little papercuts start adding up, I lose speed as I deal with them, and the inner workings of the app become a foreign entity to me.
It's counterintuitive, but what's helping me enjoy coding is actually going slower with AI. I found out that my productivity gains are not on building faster, but learning faster and in a very targeted way.
The way I like to think about it is to split work into two broad categories - creative work and toil. Creative work is the type of work we want to continue doing. Toil is the work we want to reduce.
edit - an interesting facet of AI progress is that the split between these two types of work gets more and more granular. It has led me to actively be aware of what I'm doing as I work, and to critically examine whether certain mechanics are inherently toilistic or creative. I realized that a LOT of what I do feels creative but isn't - the manner in which I type, the way I shape and format code. It's more in the manner of catharsis than creation.
You cannot remove the toil without removing the creative work.
Just like how, in writing a story, a writer must also toil over each sentence: should this be an em-dash or a comma? Should I break the paragraph here or there? All these minutiae are just as important to the final product as grand ideas and architecture are.
If you don't care about those little details, then fine. But you sacrifice some authorship of the program when you outsource those things to an agent. (And I would say, you sacrifice some quality as well).
> That includes code outside of the happy path, like error handling and input validation. But also other typing exercises like processing an entity with 10 different types, where each type must be handled separately. Or propagating one property through the system on 5 different types in multiple layers.
With AI, I feel I'm less caught up in the minutiae of programming and have more cognitive space for the fun parts: engineering systems, designing interfaces and improving parts of a codebase.
I don't mind this new world. I was never too attached to my ability to pump out boilerplate at a rapid pace. What I like is engineering and this new AI world allows me to explore new approaches and connect ideas faster than I've ever been able to before.
This is the hidden superpower of LLMs - prototyping without attachment to the outcome.
Ten years ago, if you wanted to explore a major architectural decision, you would be bogged down for weeks in meetings convincing others, then a few more weeks making it happen. Then if it didn't work out, it felt like failure and everyone got frustrated.
Now it's assumed you can make it work fast - so do it four different ways and test it empirically. LLMs bring us closer to doing actual science, so we can do away with all the voodoo agile rituals and high emotional attachment that used to dominate the decision process.
Are you not concerned that this world is deeply tied to you having an internet connection to one of a couple companies' servers? They can jack up the price, cut you off, etc.
Not going to last long though, at least not professionally. AI will do the spec and architecture too. The LLM will do the entire pipeline between customer or market research to deployment. This is already possible with bug fixes pretty much. And many features too depending on the business.
I suppose, in exactly the same way instant / frozen food makes cooking more enjoyable. If it was just a chore that you had to do, and now it's faster, sure, grab that cup-o-noodles. Knock yourself out.
Just don't expect to run a successful restaurant based on it.
A decade or two ago I remember an experiment where canned food was presented in a restaurant setting and people couldn't tell it apart from the hand-cooked. The presentation was what mattered; as long as it didn't look like it was canned/frozen, they thought it tasted like restaurant quality.
At what point do LLMs enable bad engineering practices, if instead of working to abstract or encapsulate toilsome programming tasks we point an expensive slot machine at them and generate a bunch of verbose code and carry on? I'm not sure where the tradeoff leads if there's no longer a pain signal for things that need to be re-thought or re-architected. And when anyone does create a new framework or abstraction, it doesn't have enough prior art for an LLM to adeptly generate, and fails to gain traction.
How much of "good engineering practices" exist because we're trying to make it easy for humans to work with the code?
Pick your favorite GoF design pattern. Is that the best way to do it for the computer or the best way to do it for the developer?
I'm just making this up now, maybe it's not the greatest example; but, let's consider the "visitor" pattern.
There's some framework that does a big loop and calls the visit() function on an object. If you want to add a new type, you inherit from that interface, implement visit(), and all is well. From a "good" engineering practice standpoint, this makes sense to a developer: you don't have to touch much code and your stuff lives in its own little area. That all feels right to us as developers because we don't have a big context window.
But what if your code was all generated code, and you want to add a new type that does something that would have been done in visit()? You tell the LLM "add this new functionality to the loop for this type of object". Maybe it does a case statement and puts the stuff right in the loop. That "feels" bad if there's a human in the loop, but does it matter to the computer?
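The contrast being described might look something like this sketch. Everything here is invented (no real framework is assumed): the first shape is the visitor-style structure optimized for human extension, the second is the inlined dispatch a code generator might happily emit instead:

```python
from dataclasses import dataclass

# Invented example: the human-friendly visitor shape vs. the inline
# dispatch a code generator might produce. Both behave identically.

@dataclass
class Rect:
    width: float
    height: float

class Visitor:
    def visit(self, node):
        raise NotImplementedError

class AreaVisitor(Visitor):
    # Adding behavior means adding a class; the framework loop is untouched.
    def visit(self, node):
        return node.width * node.height

def framework_loop(nodes, visitor):
    # The "big loop" that calls visit() on each object.
    return [visitor.visit(n) for n in nodes]

def generated_loop(nodes):
    # The generated alternative: case logic lives right in the loop.
    # "Feels" worse to a human maintainer, but the computer doesn't care.
    results = []
    for n in nodes:
        if isinstance(n, Rect):
            results.append(n.width * n.height)
    return results
```

The visitor version pays an indirection cost to keep each type's logic in its own little area; the inlined version trades that locality away, which only matters if a human has to read it.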
Yes, we're early: LLMs aren't deterministic, and verification may be hard now. But that may change.
In the context of a higher-level language, y=x/3 and y=x/4 look the same, but I bet the generated assembly does a shift on the latter and a multiply-by-a-constant on the former. While the "developer interface", the source code, looks similar (like writing to a visitor pattern), the generated assembly will look different. Do we care?
Great Q, and your framing "there's no longer a pain signal for things that need to be re-thought or re-architected" perfectly encapsulates a concern I hadn't yet articulated so cleanly. Thanks for that!
Yes, AI has taken away the tedium, but a lot of that could already be overcome by leveraging your text editing tools well or with basic code generation (such as being able to generate the skeleton of a class from an interface).
And there was something nice about still having to put in the manual work in those cases. It let me process what the code is actually doing and gave me the opportunity to internalize it in a way that just doesn't happen with AI. It also sort of gave me a thinking break where I was engaged at just the right level to let the thoughts about the more interesting parts float around in my head. With AI writing all the code, I feel like I'm either fully engaged with those thoughts or not engaged at all. And that's a bit of a problem because aha moments often happen when the idea is in that middle area of thought.
AI also led me to experiment a bit more. In my case it helped remove the barrier to getting that initial bare-bones skeleton of code in a new environment by helping setting up libraries and a compile chain I was unfamiliar with and then giving me a baseline to build off of. Did you find that AI helped you evenly all the way through the experience or was it more helpful earlier or later on?
I’m working on library code in zig, and it’s very nice to have AI write the FFI interface with python. That’s not technically difficult or high risk, but it is tedious and boring.
Realistically having a helper to get me over slumps like that has been amazing for my personal productivity.
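For readers unfamiliar with that kind of glue, the Python side of such a binding often looks like a small ctypes layer. This is a guess at the shape, not the commenter's actual code: the library path and function name are invented, and the Zig side would need to export a C-ABI function such as `export fn add(a: i32, b: i32) i32`:

```python
import ctypes

# Hypothetical sketch of the tedious-but-mechanical FFI layer described
# above. "libmylib.so" and "add" are invented names; a real binding would
# repeat this argtypes/restype declaration for every exported function,
# which is exactly the repetitive work being delegated to AI.

def bind_add(path="./libmylib.so"):
    lib = ctypes.CDLL(path)
    lib.add.argtypes = [ctypes.c_int32, ctypes.c_int32]
    lib.add.restype = ctypes.c_int32
    return lib.add
```

Each function needs its signature declared twice (once in Zig, once here), so the work is low-risk but scales linearly with the API surface.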
I've worked as both an IC and EM over the course of 25 years. The best part of being an IC is crafting solutions to single-human-sized problems—without needing to deal much with people. The best part of being an EM is directing the creation of larger solutions, sometimes vastly larger solutions, at the cost of dealing with people stuff.
AI takes the craft out of being an IC. IMO less enjoyable.
AI takes the human management out of being an EM. IMO way more enjoyable.
Now I can direct large-scope endeavors and 100% of my time is spent on product vision and making executive decisions. No sob stories. No performance reviews. Just pure creative execution.
Coding with AI is for those kids who were supposed to “stop playing games and clean their room right this minute”, but instead they shove all the crap in their closet and go back to playing games.
This level of detail isn't really helpful. I am working with AI and genuinely interested in learning more, but this offers very little.
More concrete examples to illustrate the core points would have been helpful. As-is the article doesn't offer much - sorry.
For one, I'm not sure what kind of code he writes. How does he write tests? Are these unit tests, property-based tests? How does he quantify success? Leaves a lot to be desired.
I'm glad you enjoy it. I fucking hate it. Working directly with code is part of how I approach solving software problems.
What's worse, the more I rely on the bot, the less my internal model of the code base is reinforced. Every problem the bot solves, no matter how small, doesn't feel like a problem I solved and understanding I'd gained, it feels like I used a cheat code to skip the level. And passively reviewing the bot's output is no substitute for actively engaging with the code yourself. I can feel the brainrot set in bit by bit. It's like I'm Bastian making wishes on AURYN and losing a memory with every wish. I might get a raw-numbers productivity boost now, but at what cost later?
I get the feeling that the people who go on about how much fun AI coding is either don't actually enjoy programming or are engaging in pick-me behavior for companies with AI-use KPIs.
> write the first test so the AI knows how they should be written, and which cases should be tested. Then I tell the AI each test case and it writes them for me.
This is too low level. You’d be better off describing the things that need testing and asking for it to do red/green test-driven development (TDD). Then you’ll know all the tests are needed, and it’ll decide what tests to write without your intervention, and make them pass while you sip coffee :)
> The only thing where I don’t trust it yet is when code must be copy pasted.
Ask it to perform the copy-paste using code - have it write and execute a quick script. You can review the script before it runs and that will make sure it can’t alter details on the way through.
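That "quick script" idea can be as small as this sketch. The point is that you review the script, and the bytes are then moved by code rather than regenerated by the model; the paths and line ranges here are placeholders:

```python
# Sketch of a reviewable "mechanical move" script, as suggested above.
# Because the copy happens in code, the LLM brain is never between the
# source bytes and the destination bytes.

def copy_lines(src_path, dst_path, start, end, insert_at):
    """Copy lines start..end (0-based, end exclusive) of src verbatim
    into dst before line insert_at, preserving every byte."""
    with open(src_path) as f:
        src = f.readlines()
    with open(dst_path) as f:
        dst = f.readlines()
    dst[insert_at:insert_at] = src[start:end]
    with open(dst_path, "w") as f:
        f.writelines(dst)
```

Diffing the destination file afterward then confirms the move was exact, which is cheaper than proofreading regenerated code for tiny transcription errors.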
This is the kind of person who would find more joy in telling an android to play a piano than practicing scales/arpeggios etc in order to play complicated music themselves.
I legitimately enjoy scale practice when I'm playing piano. In a similar way I've always found some pleasure in writing boilerplate and refactoring.
There is joy/peace and instructive value in the "boring" parts of almost every discipline. It's perhaps more meditative and subtle, but still very much there in abundance. And primes you much better for a real flow state.
Ever since AI exploded at my day job, I haven't legitimately been in anything resembling a programming flow state at work.
It made coding way different for me. I'm able to get a proof-of-concept for an idea up pretty quick, and then I have to go back and decide if I like the style it produced.
I feel more like a software producer or director than an engineer though.
AI made coding really enjoyable for me, for a subset of projects: projects that I want but don't really care about the design/implementation, or projects that have a lot of fiddly one-off configurations where it doesn't make sense to dig in and learn all about the system if it is mostly set-it-and-forget-it. A lot of my home automation/home systems are now fully implemented by AI, because I don't really care how performant it is or how the various components are integrated, and it is very straightforward to tell if it works or if it doesn't work.
The split here is between AI as amplifier vs. AI as replacement. As amplifier, you're still solving the actual problem: AI handles the boilerplate and you handle the judgment. As replacement, you lose the feedback loop that makes you better over time. The developers who thrive will be the ones who know which problems still require them to be in the loop. That's a skill that takes deliberate practice and intuition to develop, and almost no AI tooling is designed to teach it.
I like taking my code, especially SQL stuff, and asking AI for a better way to write it, and it is actually making me better at SQL because of SQL methods I didn't know I could use before. I also like it in VS Code to just quickly do the redundant things with auto-complete. Now, I have 15 years of programming behind me. If I had AI starting out, it would be very bad: I would lack understanding and be an incompetent programmer.
> The only thing where I don’t trust it yet is when code must be copy pasted. I can’t trace if it actually cuts and pastes code, or if the LLM brain is in between. In the latter case there may be tiny errors that I’d never find, so I’m not doing that. But maybe I’m paranoid.
imo, this isn't paranoid at all, and it very likely filters through the LLM, unless you provide a tool/skill and explicit instructions. Even then you're rolling the dice, and the diff will have to be checked.
I enjoy one specific fact about programming with Claude.
My work often entails tweaking, fixing, extending of some fairly complex products and libraries, and AI will explain various internal mechanisms and logic of those products to me while producing the necessary artifacts.
Sure my resulting understanding is shallow, but shallow precedes deep, and without an AI "tutor", the exploration would be a lot more frustrating and hit-and-miss.
You know what's worse than writing boilerplate, or logging code, or unit tests, or other generic, typically low value code? Reviewing it. All the supporting comments here suggest they let AI write it and you're done. This is significantly more dangerous than writing it yourself and not having a second review step; at least you had human eyes on it once.
I agree with the author but maybe it's bad to miss the pain you get on things like "propagating one property through the system on 5 different types in multiple layers".
These kinds of pain points usually indicate too much architecture, or the wrong one. Whether we can still feel these kinds of things when the clanker does the work is something we must think about.
the copy paste concern is the most interesting bit honestly - even when it's not literally copy-pasting, AI error handling often looks correct but silently eats exceptions or returns wrong defaults. it gets the structure but misses what the code actually needs to do.
the boilerplate stuff is spot on though. the 10-type dispatch pattern is exactly where i gave up doing it manually
I hate writing proposals. It's the most mind numbing and repetitive work which also requires scrutinizing a lot of details.
But now I've built a full proposal pipeline, skills, etc. that goes from "I want to create a proposal" to a finished document: it collects all the info I need, creates a folder in Google Drive, I add all the supporting docs, and it generates a React page, uses code to calculate numbers in tables, and builds an absolutely beautiful react-to-pdf PDF file.
I have a comprehensive document outlining all the work our company's ever done, made from analyzing all past proposals and past work in Google Drive, and the model references that when weaving in our past performance/clients.
It is wonderful. I can now just say things like "remove this module from the total cost" without having to edit various parts of the document (like with hand-editing code). Claude (or anything else) will just update the "code" for the proposal (which is a JSON file) and the new proposal is ready, with perfect formatting, perfect numbers, perfect tables, everything.
So I can stay high level thinking about "analyze this module again, how much dev time would we need?" etc. and it just updates things.
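The reason "remove this module" is safe in a setup like this is that every derived number is recomputed from the JSON rather than hand-edited. A toy sketch of the idea, with entirely invented field names (the real pipeline renders the JSON via react-to-pdf):

```python
# Invented sketch of a JSON-backed proposal. Field names are hypothetical.
proposal = {
    "modules": [
        {"name": "auth", "cost": 4000},
        {"name": "reporting", "cost": 6000},
    ]
}

def remove_module(p, name):
    # "Remove this module from the total cost" is one mechanical edit...
    p["modules"] = [m for m in p["modules"] if m["name"] != name]
    return p

def total_cost(p):
    # ...and totals are always recomputed, so the tables can't drift
    # out of sync with the line items the way hand-edited numbers do.
    return sum(m["cost"] for m in p["modules"])
```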
If you'd like me to do something like this with your company, get in touch :) I'm starting to think (as of this week) others will benefit from this too and can be a good consulting engagement.
Uh, no. The happy path is the easy part with little to no thinking required. Edge cases and error handling are where we have to think hardest and learn the most.
Sure, but please just flag submissions rather than posting comments like this. The guidelines are explicit about this:
Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.
If a story hangs around on the front page even after you've flagged it, you can always email us (hn@ycombinator.com) and we'll take a look. That will get our attention much more quickly than a comment.
"What I find annoying is repetitive stuff that's just typing"
..
"Where I can't trust AI is if it needs to copy paste / duplicate code"
???
AI takes away the "boring", "tedious" parts of coding for you, yet at the same time you don't trust it to even just duplicate code from one place to another?
The creative vs toil split resonates, but I think there's a third category everyone misses: the connective tissue. The glue code, the error handling, the edge cases that aren't creative but teach you how things actually break.
I run 17 products as an indie maker. AI absolutely helps me ship faster — I can prototype in hours what used to take days. But the understanding gap is real. I've caught myself debugging AI-generated code where I didn't fully grok the failure mode because I didn't write the happy path.
My compromise: I let AI handle the first pass on boilerplate, but I manually write anything that touches money, auth, or data integrity. Those are the places where understanding isn't optional.
slibhb|11 days ago
I'm excited to work on more things that I've been curious about for a long time but didn't have the time/energy to focus on.
data-ottawa|11 days ago
I’m working on library code in zig, and it’s very nice to have AI write the FFI interface with python. That’s not technically difficult or high risk, but it is tedious and boring.
Realistically having a helper to get me over slumps like that has been amazing for my personal productivity.
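For readers unfamiliar with this kind of FFI boilerplate: a zig function exported with the C calling convention is callable from Python via ctypes, and the binding code is exactly the tedious-but-low-risk kind described above. A minimal sketch, using libm's sqrt as a stand-in since the actual zig library isn't shown:

```python
import ctypes
import ctypes.util

# Load a shared library with a C ABI. A zig library built with
# `zig build-lib -dynamic` and exported via callconv(.C) would be
# loaded the same way; libm is used here only as a stand-in.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declaring argument and return types is the repetitive part that
# an AI assistant can churn out for every exported function.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double
```

Each exported function needs the same few lines of argtypes/restype declarations, which is why generating them mechanically is attractive.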
enduser|11 days ago
AI takes the craft out of being an IC. IMO less enjoyable.
AI takes the human management out of being an EM. IMO way more enjoyable.
Now I can direct large-scope endeavors and 100% of my time is spent on product vision and making executive decisions. No sob stories. No performance reviews. Just pure creative execution.
wy1981|11 days ago
More concrete examples to illustrate the core points would have been helpful. As-is the article doesn't offer much - sorry.
For one, I am not sure what kind of code he writes. How does he write tests? Are these unit tests? Property-based tests? How does he quantify success? It leaves a lot to be desired.
bitwize|11 days ago
What's worse, the more I rely on the bot, the less my internal model of the code base is reinforced. Every problem the bot solves, no matter how small, doesn't feel like a problem I solved and understanding I'd gained, it feels like I used a cheat code to skip the level. And passively reviewing the bot's output is no substitute for actively engaging with the code yourself. I can feel the brainrot set in bit by bit. It's like I'm Bastian making wishes on AURYN and losing a memory with every wish. I might get a raw-numbers productivity boost now, but at what cost later?
I get the feeling that the people who go on about how much fun AI coding is either don't actually enjoy programming or are engaging in pick-me behavior for companies with AI-use KPIs.
cadamsdotcom|11 days ago
This is too low level. You'd be better off describing the things that need testing and asking it to do red/green test-driven development (TDD). Then you'll know all the tests are needed, and it'll decide what tests to write without your intervention, and make them pass while you sip coffee :)
> I don’t trust it yet is when code must be copy pasted.
Ask it to perform the copy-paste using code - have it write and execute a quick script. You can review the script before it runs and that will make sure it can’t alter details on the way through.
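A minimal sketch of what such a script could look like — the copy_block function and the marker strings are hypothetical, not anything from the thread. The point is that the bytes are moved by code, so nothing can be silently altered in transit:

```python
import pathlib
import re

def copy_block(src_path, dst_path, start_marker, end_marker):
    """Copy the text between two markers from src to dst verbatim,
    so an LLM can't reword anything on the way through."""
    src = pathlib.Path(src_path).read_text()
    # re.escape keeps the markers literal; re.S lets '.' span newlines.
    m = re.search(re.escape(start_marker) + r".*?" + re.escape(end_marker),
                  src, re.S)
    if m is None:
        raise ValueError("markers not found in source file")
    block = m.group(0)
    with open(dst_path, "a") as f:
        f.write("\n" + block + "\n")
    return block
```

You review these dozen lines once, instead of diffing every duplicated block by eye.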
stevenbhemmy|11 days ago
I legitimately enjoy scale practice when I'm playing piano. In a similar way I've always found some pleasure in writing boilerplate and refactoring.
There is joy/peace and instructive value in the "boring" parts of almost every discipline. It's perhaps more meditative and subtle, but still very much there in abundance. And primes you much better for a real flow state.
Ever since AI exploded at my day job, I haven't legitimately been in anything resembling a programming flow state at work.
butterisgood|11 days ago
I feel more like a software producer or director than an engineer though.
daveguy|11 days ago
imo, this isn't paranoid at all, and it very likely filters through the LLM, unless you provide a tool/skill and explicit instructions. Even then you're rolling the dice, and the diff will have to be checked.
inglor_cz|11 days ago
My work often entails tweaking, fixing, extending of some fairly complex products and libraries, and AI will explain various internal mechanisms and logic of those products to me while producing the necessary artifacts.
Sure my resulting understanding is shallow, but shallow precedes deep, and without an AI "tutor", the exploration would be a lot more frustrating and hit-and-miss.
skeeter2020|11 days ago
yomismoaqui|11 days ago
These kinds of pain points usually indicate too much architecture, or the wrong architecture. Being able to feel these kinds of things when the clanker does the work is something we must think about.
unknown|11 days ago
[deleted]
the_harpia_io|11 days ago
the boilerplate stuff is spot on though. the 10-type dispatch pattern is exactly where i gave up doing it manually
tqlpasj|11 days ago
https://lighthouseapp.io/blog/introducing-lighthouse
It looks like a vibe coded website.
atonse|11 days ago
I hate writing proposals. It's the most mind numbing and repetitive work which also requires scrutinizing a lot of details.
But now I've built a full proposal pipeline, skills, etc. that goes from "I want to create a proposal" to a finished document: it collects all the info I need, creates a folder in Google Drive, I add all the supporting docs, and it generates a React page, uses code to calculate the numbers in tables, and builds an absolutely beautiful react-to-pdf PDF file.
I have a comprehensive document outlining all the work our company's ever done, made from analyzing all past proposals and past work in Google Drive, and the model references that when weaving in our past performance/clients.
It is wonderful. I can now just say things like "remove this module from the total cost" without having to edit various parts of the document (like with hand-editing code). Claude (or anything else) will just update the "code" for the proposal (which is a JSON file) and the new proposal is ready, with perfect formatting, perfect numbers, perfect tables, everything.
So I can stay high level thinking about "analyze this module again, how much dev time would we need?" etc. and it just updates things.
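To make the idea concrete, a minimal sketch of what a JSON-backed proposal might look like. The schema, field names, and numbers here are invented for illustration — the point is that every total is recomputed from the data, so a one-line edit can never leave a stale number behind:

```python
# Hypothetical shape of the proposal "code": data in, documents out.
proposal = {
    "client": "Acme Corp",
    "modules": [
        {"name": "Auth", "dev_days": 10, "day_rate": 800},
        {"name": "Reporting", "dev_days": 6, "day_rate": 800},
    ],
}

def total_cost(p):
    # Totals are always derived, never hand-edited.
    return sum(m["dev_days"] * m["day_rate"] for m in p["modules"])

def remove_module(p, name):
    # "Remove this module from the total cost" becomes one data edit;
    # every table and total downstream is regenerated from the JSON.
    p["modules"] = [m for m in p["modules"] if m["name"] != name]
    return p
```

The PDF renderer (react-to-pdf in the author's setup) only ever sees this data, which is why the formatting and numbers stay consistent across edits.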
If you'd like me to do something like this with your company, get in touch :) I'm starting to think (as of this week) others will benefit from this too and can be a good consulting engagement.
lbrito|11 days ago
Uh, no. The happy path is the easy part, with little to no thinking required. Edge cases and error handling are where we have to think hardest and learn the most.
tomhow|11 days ago
Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.
If a story hangs around on the front page even after you've flagged it, you can always email us (hn@ycombinator.com) and we'll take a look. That will get our attention much more quickly than a comment.
thegrim33|10 days ago
..
"Where I can't trust AI is if it needs to copy paste / duplicate code"
???
AI takes away the "boring", "tedious" parts of coding for you, yet at the same time you don't trust it to even just duplicate code from one place to another?
chrisjj|11 days ago
...just not for users.
cranberryturkey|11 days ago
I run 17 products as an indie maker. AI absolutely helps me ship faster — I can prototype in hours what used to take days. But the understanding gap is real. I've caught myself debugging AI-generated code where I didn't fully grok the failure mode because I didn't write the happy path.
My compromise: I let AI handle the first pass on boilerplate, but I manually write anything that touches money, auth, or data integrity. Those are the places where understanding isn't optional.
xyzsparetimexyz|11 days ago
[deleted]