Ask HN: How is AI-assisted coding going for you professionally?
If you've recently used AI tools for professional coding work, tell us about it.
What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?
Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.
The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.
viccis|18 days ago
It has also enabled a few people who haven't written code or planned out implementation details in a long time (sometimes a decade or more) to start doing so again, and so I'm getting some bizarre suggestions.
Otherwise, it really does depend on what kind of code. I hand write prod code, and the only thing that AI can do is review it and point out bugs to me. But for other things, like a throwaway script to generate a bunch of data for load testing? Sure, why not.
hdhdhsjsbdh|18 days ago
At work, the devs up the chain now do everything with AI – not just coding – then task me with cleaning it up. It is painful and time-consuming, and the codebase is a mess. In one case I had to merge a feature from one team into the main codebase, but the feature was AI-coded, so it did not obey the API design of the main project. It also included a ton of stuff you don't need in the first pass – a ton of error checking and hand-rolled parsing, etc. – that I had to spend over a week unrolling so that I could trim it down and redesign it to work in the main codebase. It was a slog, and it also made me look bad, because it took me forever compared to the team who originally churned it out almost instantly. AI tools are not good at this kind of design-deconflicting task, so while it's easy to get the initial concept out the gate almost instantly, you can't just magically fit it into the bigger codebase without facing the technical debt you've generated.
In my personal projects, I get to experience a bit of the fun I think others are having. You can very quickly build out new features, explore new ideas, etc. You have to be thoughtful about the design because the codebase can get messy and hard to build on. Often I design the APIs and then have Claude critique them and implement them.
I think the future is bleak for people in my spot professionally – not junior, but also not leading the team. I think the middle will be hollowed out and replaced with principals who set direction, coordinate, and execute. A privileged few will be hired and developed to become leaders eventually (or strike gold with their own projects), but everyone in between is in trouble.
theshrike79|18 days ago
It's just plain unprofessional to YOLO shit with AI and force actual humans to read the code even when the "author" hasn't read it.
Also, API design etc. should be automatically checked by tooling and CI builds, and PR merges should be denied until the checks pass.
dude250711|18 days ago
Why the hell are you playing hero? Delegate the choice to your manager: ruin the codebase, or allocate two weeks for clean-up - their choice. If the magical AI team claims they can do the integration faster - let them.
phyzix5761|18 days ago
If they're handing you broken code, call them out on it. Say, "This doesn't do what it says it does. Do you want me to create a story for redoing all this work?"
suzzer99|18 days ago
This has to be the most thankless job for the near future. It's hard and you get about as much credit as the worker who cleans up the job site after the contractors are done, even though you're actually fixing structural defects.
And god forbid you introduce a regression bug cleaning up some horrible redundant spaghetti code.
ehnto|18 days ago
That is on the people using the AI and not cleaning up/thinking about it at all.
fastasucan|18 days ago
Last year I was working on implementing a pretty big feature in our codebase. It required a lot of focus to get the business logic right, and at the same time you had to be very creative to make it feasible to run without hogging too many resources.
When I was nearly done and working on catching bugs, team members grew tired of waiting and started taking my code from x weeks ago (I have no idea why), feeding it to Claude or whatever, and then coming back with a solution. So instead of me finishing my code, I had to go through their versions of my code.
Each one of the proposals had one or more business requirements wrong and several huge bugs. Not one was any closer to a solution than mine was.
I would have appreciated any contribution to my code, but thinking that it would be so easy to just take my code and finish it by asking Claude was rather insulting.
Izkata|18 days ago
I know my mind fairly well, and I know my style of laziness will result in atrophying skills. Better not to risk it.
One of my co-workers already admitted as much to me around six months ago, and that he was trying not to use AI for any code generation anymore, but it was really difficult to stop because it was so easy to reach for. Sounded kind of like a drug addiction to me. And I had the impression he only felt comfortable admitting it to me because I don't make it a secret that I don't use it.
Another co-worker did stop using it to generate code because (if I'm remembering right) he can tell what it generates is messy for long-term maintenance, even if it does work and even though he's new to React. He still uses it often for asking questions.
A third (this one a junior) seemed to get dumber over the past year, opening merge requests that didn't solve the problem. In a couple of these cases my manager mentioned either seeing him use AI while they were pairing (it looked good enough, so the problems just slipped by) or seeing hints in the merge request in how AI names or structures the code.
Xcelerate|18 days ago
I don't use it at all to program despite that being my day job for exactly the reason you mentioned. I know I'll totally forget how to program. During a tight crunch period, I might use it as a quick API reference, but certainly not to generate any code. (Absolutely not saying it's not useful for this purpose—I just know myself well enough to know how this is going to go haha)
tim-tday|18 days ago
I started using it for things I hate, ended up using it everywhere. I move 5x faster. I follow along most of the time. Twice a week I realize I’ve lost the thread. Once a month it sets me back a week or more.
philipp-gayret|18 days ago
> he can tell what it generates is messy for long-term maintenance, even if it does work and even though he's new to React.
When one can generate code in such a short amount of time, logically it is not hard to maintain. You could just re-generate it if you didn't like it. I don't believe this style of argument where it's easy to generate with AI but then you cannot maintain it after. It does not hold up logically, and I have yet to see such a codebase where AI was able to generate it, but now cannot maintain it. What I have seen this year is feature-complete language and framework rewrites done by AI with these new tools. For me the unmaintainable code claim is difficult to believe.
onlyrealcuzzo|18 days ago
Professionally, I have had almost no luck with it, outside of summarizing design docs or literally just finding something in the code that a simple search might not find - such as: where is this team's code that does X?
I have yet to successfully prompt it and get a working commit.
Further, I will add that I don't personally know any ICs who have successfully used it. There are endless posts of people talking about how they're now 10x more productive and how everyone needs to do X, Y, and Z now - I just don't know any of these people.
Non-professionally, it's amazing how well it does on a small greenfield task, and I have seen that 10x improvement in velocity. But, at work, close to 0 so far.
Of the posts I've seen at work, they typically tend to be teams doing something new / greenfield-ish or a refactor. So I'm not surprised by their results.
tim-tday|18 days ago
I've probably prompted 10,000 lines of working code in the last two months. I started with Terraform, which I know backwards and forwards. It works perfectly 95% of the time, and I know where it will go wrong, so I watch for that. (Working greenfield, in other existing repos, and with other collaborators.)
Moved on to a big data processing project, works great, needed a senior engineer to diagnose one small index problem which he identified in 30s. (But I’d bonked on for a week because in some cases I just don’t know what I don’t know)
Meanwhile a colleague wanted a sample of the data. Vibe coded that. (Extract from zip without decompressing.) He wanted it randomized. One shot. Done. Then he wanted it randomized across 5 categories. Then he wanted 10x the sample size. The data request was completed before the conversation was over. I would have worked on that for three hours before, and bonked if I hit the limit of my technical knowledge.
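The commenter doesn't share the actual script, but "sample lines from a zip without decompressing it to disk" can be done with a streaming read plus reservoir sampling. A minimal sketch - the function and file names are my own, not from the comment:

```python
import io
import random
import zipfile

def sample_lines_from_zip(zip_path, member, k, seed=None):
    """Draw k random lines from one file inside a zip archive.

    Reads the member as a stream, so the archive is never extracted
    to disk; reservoir sampling keeps memory usage at O(k).
    """
    rng = random.Random(seed)
    sample = []
    with zipfile.ZipFile(zip_path) as zf:
        with zf.open(member) as raw:
            for i, line in enumerate(io.TextIOWrapper(raw, encoding="utf-8")):
                if len(sample) < k:
                    sample.append(line.rstrip("\n"))
                else:
                    # Keep line i with probability k/(i+1).
                    j = rng.randint(0, i)
                    if j < k:
                        sample[j] = line.rstrip("\n")
    return sample
```

"Randomized across 5 categories" would then just be running this per category, or bucketing lines by a key before the reservoir step.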
Built a monitoring stack. Configured servers, used it to troubleshoot dozens of problems.
For stuff I can’t do, now I can do. For stuff I could do with difficulty now I can do with ease. For stuff I could do easily now I can do fast and easy.
Your vastly different experience is baffling and alien to me. (So thank you for opening my eyes)
humbleharbinger|18 days ago
Most of my gripes are with the harness; CC is way better.
In terms of productivity I'm def 2-4X more productive at work, >10x more productive on my side business. I used to work overtime to deliver my features. Now I work 9-5 and am job hunting on the side while delivering relatively more features.
I think a lot of people are missing that AI is not just good for writing code. It's good for data analysis and all sorts of other tasks, like debugging and deploying. I regularly use it to manage deployment loops (e.g. make a code change, deploy the changes to gamma, and verify they work by making a sample request and checking the output in the CloudWatch logs). I have built features in 2 weeks that would have taken me a month, just because I'd have had to learn some nitty-gritty technical details that I'd never use again in my life.
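The "deploy, then verify" loop described here boils down to polling a check until it passes or times out. A generic sketch of that verify step - the actual deploy command, sample request, and log lookup are whatever you plug in as `check`; none of this code is from the comment:

```python
import time

def wait_until(check, timeout=60.0, interval=2.0):
    """Poll `check` until it returns a truthy value or time runs out.

    `check` is any zero-argument callable, e.g. one that makes a
    sample request and greps the service logs for the expected line.
    Returns whatever truthy value `check` produced.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("deploy verification did not pass in time")
```

An agent effectively runs this loop for you, with the added trick that on failure it can read the logs and attempt a fix instead of just raising.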
For data analysis I have an internal glue catalog, I can just tell it to query data and write a script that analyzes X for me.
AI and agents particularly have been a huge boon for me. I'm really scared about automation but also it doesn't make sense to me that SWE would be automated first before other careers since SWE itself is necessary to automate others. I think there are some fundamental limitations on LLMs (without understanding the details too much), but whatever level of intelligence we've currently unlocked is fundamentally going to change the world and is already changing how SWE looks.
hn_throwaway_99|18 days ago
In the bucket of "really great things I love about AI", that would definitely be at the top. So often in my software engineering career I'd have to spend tons of time learning and understanding some new technology - some new language, some esoteric library, some cobbled-together build harness - and I always found it pretty discouraging knowing that I'd never have reason to use that tech outside the particular codebase I was working on at the time. And far from being rare, working in a fairly large company, that was a pretty frequent occurrence. E.g. I'd look at a design doc or feature request and think to myself "oh, that's pretty easy and straightforward", only to go into the codebase and see that the original developer/team had decided on some extremely niche transaction handling library or whatever (or worse, something homegrown with no tests...), and trying to figure out that esoteric tech turned into 85% of the actual work. AI doesn't reduce that to 0, but I've found it has been a huge boon for understanding new tech, and especially for getting my dev environment and build set up well, much faster than I could do manually.
Of course, AI makes it a lot easier to generate exponentially more poorly architected slop, so not sure if in a year or two from now I'll just be ever more dependent on AI explaining to me the mountains of AI slop created in the first place.
clintonb|18 days ago
This has been a godsend over the past week while deploying a couple services. One is a bridge between Linear and our Coder.com installation so folks can assign the work to an agent. Claude Code can do most of the work while I sleep since it has access to kubectl, Linear MCP, and Coder MCP. I no longer have to manually build, deploy, test, repeat. It just does it all for me!
gerdesj|18 days ago
Sanctioned comment?
wk_end|18 days ago
This year I grudgingly bit the bullet and began using AI tools, and to my dismay they've been a pretty big boon for me in this case. Not just for code generation - they're really good at probing the monolith and answering questions I have about how it works. Before, I'd spend days poring over code before starting work, trying to figure out the right way to build something or where to break in, pinging people over in India or eastern Europe with questions and hoping they'd reply to me overnight. AI has totally replaced that, and it works shockingly well.
When I do fall back on it for code generation, it's mostly just to mitigate the tedium of writing boilerplate. The code it produces tends to be pretty poor - both in terms of style and robustness - and I'll usually need to take at least a couple of passes over it to get it up to snuff. I do find this faster than writing everything out by hand in the end, but not by a lot.
For my personal projects I don't find it adds much, but I do enjoy rubber ducking with ChatGPT.
simonw|18 days ago
I'm enjoying myself so much. Projects I've been thinking about for years are now a couple of hours of hacking around. I'm readjusting my mental model of what's possible as a single developer. And I'm finally learning Go!
The biggest challenge right now is keeping up with the review workload. For low stakes projects (small single-purpose HTML+JS tools for example) I'm comfortable not reviewing the code, but if it's software I plan to have other people use I'm not willing to take that risk. I have a stack of neat prototypes and maybe-production-quality features that I can't ship yet because I've not done that review work.
I mainly work as an individual or with one other person - I'm not working as part of a larger team.
QuadrupleA|18 days ago
A couple "win" examples: add in-text links to every term in this paragraph that appears elsewhere on the page, plus corresponding anchors in the relevant page parts. Or, replace any static text on this page with any corresponding dynamic elements from this reference URL.
Lose examples: constant edit-format glitches (not matching the searched text - even the venerable Opus 4.6 constantly screws this up), unnecessary intermediate variables, ridiculously over-cautious exception handling, failing to see opportunities to isolate repeated code into a function, or to use an existing function that exactly implements said N lines of code, etc.
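The first "win" example above (wrap each term in a link to its in-page anchor) is the kind of mechanical transform that's easy to spot-check in review. A rough sketch of it - the function name and the term-to-anchor mapping are invented for illustration, and it's deliberately naive: real HTML would want proper parsing rather than regex:

```python
import re

def add_intext_links(paragraph_html, anchor_ids):
    """Link the first occurrence of each known term to its in-page
    anchor. `anchor_ids` maps term -> anchor id defined elsewhere
    on the page. Naive: assumes terms don't land inside HTML tags.
    """
    # Longest terms first, so "texture loader" wins over "texture".
    for term in sorted(anchor_ids, key=len, reverse=True):
        pattern = r"\b" + re.escape(term) + r"\b"
        repl = '<a href="#{}">{}</a>'.format(anchor_ids[term], term)
        paragraph_html = re.sub(pattern, repl, paragraph_html, count=1)
    return paragraph_html
```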
slurpyb|18 days ago
And then all of a sudden you're just arguing with the terminal all day - the specs are written by GPT, delivered in an email written by GPT. Sometimes they don't even have the time to slice their prompt from the edges of the paste, and the only thing I can think is "I need to make the most of 0.5x off-peak Claude rates."
Fuck.
I got lots of pretty TUIs though, so that's neat.
vemv|18 days ago
It seems to me that sadly, paying for getting a few isolated tasks done is becoming a thing of the past.
lazy_afternoons|18 days ago
I have 10 years of experience. I am a reasonable engineer. I can tell you that about half of the hype on twitter is real. It is a real blessing for small teams.
We have 100k DAU for a consumer CRUD app. We built and maintain everything in-house with 3 engineers. This would have taken at least 10 engineers 3-4 years back.
We don't have a bug list. We are not "vibe coding": 2 of us understand almost all of the codebase. We have processes to make sure the core integrity of the codebase doesn't go for a toss.
No one has touched the editor in months.
Even the product folks can raise a PR for small config changes from slack.
Velocity is through the roof and code quality is as good if not better than when we write by hand.
We refactor A LOT more than before because we can afford to.
I love it.
greenpizza13|18 days ago
We have cursor with essentially unlimited Opus 4.6 and it’s fundamentally changed my workflow as a senior engineer. I find I spend much more time designing and testing my software and development time is almost entirely prompting and reviewing AI changes.
I'm afraid my coding skills are atrophying - in fact, I know they are - but I'm not sure the coding was the part of my job I truly enjoyed. I enjoy thinking at a higher level: architecture, connecting components, focusing on the user experience. But I think using these AI tools is a form of golden handcuffs. If I go work at a startup without the money to pay for these models, I think for the first time in my career I would be less able to successfully code a feature than I was last year.
So professionally there are pros and cons. My design and architecture skills have greatly improved as I am spending more time doing this.
Personally it’s so much fun. I’ve made several side projects I would have never done otherwise. Working with Claude code on greenfield projects is a blast.
christophilus|18 days ago
The one thing I’m not sure about is: does code quality and consistency actually matter? If your architecture is sufficiently modular, you can quickly and inexpensively regenerate any modules whose low quality proves to be problematic.
So, maybe we really are fucked. I don’t know.
Brystephor|18 days ago
We got broad and wide access to AI tools maybe a month ago now. AI tools meaning claude code, codex, cursor and a set of other random AI tools.
I use them very often. They've taken away a lot of the fun and relaxing parts of my job and have overall increased my stress. I am on the product side of the business, and it now feels necessary for me to have 10 new ideas; the people with the most ideas will be rewarded, which I am not as good at. I've tried having the agents identify opportunities for infra improvements and had no luck there. I haven't tried it for product suggestions, but I think it would be poor at that too.
I get sent huge PRs and huge docs now that I wasn't sent before, with pressure to accept them as is.
I write code much faster but commit it at the same pace because reviews take so long. I still generate single-task PRs to keep them reviewable and do my own thorough review beforehand. I always have an idea in my head about how it should work before getting started, and I push the agent to use my approach. The AI tools are good at catching small bugs, like mutating things across threads. I like to use them to generate implementation plans (that only I and the bots read; I still handwrite docs that are broadly shared and referenced).
Overall, AI has me nervous - primarily because it does the parts that I like very well, and has me spending a higher portion of my job on the things I don't like or find more tiresome.
michaelteter|17 days ago
I have a lot of experience, low and high level. These AI tools allow me to "discuss" possibilities, research approaches, and test theories orders of magnitude faster than I could in the past.
I would roughly estimate that my ability to produce useful products is at least 20x. A good bit of that 'x' is because of the elimination of mental barriers. There have always been good ideas I had which I knew could work, but I also knew that to prove that they could work would take a lot of focus and research (leveling up on specific things). And that takes human energy - while I'm busy also trying to do good things in my day job.
Now I have immensely powerful minions and research assistants. I can test any theory I have in an hour or less.
While these minions are being subsidized in the wonderful VC way, I can get a lot done. If the real costs start to bleed through, I'll have to scale back my explorations. (Because at some point, I'll have to justify testing my theories against spending $200-300.)
To your questions, I'm usually a solo builder anyway. I've built serious things for serious companies, but almost always solo. So that's quite a burden. And now I'm weary of all that corporate stuff, so I build for myself. And what a joy it is, having these powertools.
If I were in a company right now, I could absolutely replace a team of 5 people with me + AI... assuming the CTO wasn't the (usual) limiting factor.
notatoad|18 days ago
Copilot completions are amazingly useful. Chatting with the chatbot is a super useful debugging tool. Giving it a function or database query and asking the AI to optimize it works great. But true vibe coding is still, imho, more of a party trick than an actual productivity multiplier. It can do things that look useful, and it can solve immediate self-contained problems. But it can't create launchable products that serve the needs of multiple users.
VoidWhisperer|18 days ago
The productivity comes from three main areas for me:
- Having the AI coding assistant write unit tests for my changes. This used to be by far my least favorite part of writing software, mostly because instead of solving problems, it was the monotonous process of gathering mock data to exercise specific pathways, trying to make sure I was covering all the cases, and then debugging the tests. An AI coding assistant lets me just review the tests to make sure they cover all the cases I can think of and that there aren't any overtly wrong assumptions.
- Research. It has been extraordinarily helpful in giving me insight into how to design some larger systems when I have extremely specific requirements but don't necessarily have the complete experience to architect them myself - I know enough to understand if the system is going to correctly accomplish the requirements, but not to have necessarily come up with architecture as a whole
- Quick test scripts. It has been extremely useful for generating quick SQL data for testing things, along with quick one-off scripts to test things like external provider APIs
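The "quick SQL data for testing" scripts in the last bullet are usually just a few lines of scripted INSERTs. A sketch of the shape such a generated script tends to take - the users(id, name, age) table and function name are invented for illustration; swap in your own schema:

```python
import random
import string

def fake_insert_rows(table, n, seed=0):
    """Generate n INSERT statements with made-up user rows.

    Seeded so reruns produce the same test data. Values are built
    by string formatting, which is fine for throwaway local test
    data but not for anything touching untrusted input.
    """
    rng = random.Random(seed)
    stmts = []
    for i in range(1, n + 1):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        age = rng.randint(18, 90)
        stmts.append(
            "INSERT INTO {} (id, name, age) VALUES ({}, '{}', {});".format(
                table, i, name, age
            )
        )
    return stmts
```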
ivraatiems|18 days ago
I agree, this is where coding agents really shine for me. Even if they get the details wrong, they often pinpoint where things happen and how quite well.
They're also great for rapid debugging, or assisted bug fixing. Often, I will manually debug a problem, then tell the AI, "This exception occurs in place Y because thing X is happening, here's a stack trace, propose a fix", and then it will do the work of figuring out where to put the fix for me. I already usually know WHAT to do, it's just a matter of WHERE in context. Saves a lot of time.
Likewise, if I have something where I want thing X to do Y, and X already does Z, then I'll say, "Implement a Y that works like Z but for A B C", and it'll usually get it really close on the first try.
adelie|18 days ago
my team has largely avoided AI; our sister team has been quite gungho on it. i recently handed off a project to them that i'd scoped at about one sprint of work. they returned with a project design that involved four microservices, five new database tables, and an entirely new orchestration and observability layer. it took almost a week of back-and-forth to pare things down.
since then, they've spent several sprints delivering PRs that i now have to review. there's lots of things that don't work, don't make sense, or reinvent things we already have from scratch. almost half the code is dedicated to creating 'reusable' and 'modular' classes (read: boilerplate) for a project that was distinctly scoped as a one-off. as a result, this takes hours, and it's cut into my own sprint velocity. i'm doing all the hard work but receiving none of the credit.
management just told me that every engineer is now required to use AI. i'm tired.
drrob|18 days ago
The suggestions are correct about 40% of the time, so I'm actually surprised when they're right, rather than becoming reliant on them. It saves me maybe 10 minutes a day.
robbbbbbbbbbbb|18 days ago
We use a mix of agentic and conversational tools, just pick your own and go with it.
For Unity development (our main codebase and source of value) I give current gen tools a C- for effectiveness. For solving confined, well modularisable problems (eg refactor this texture loader; implement support for this material extension) it’s good. For most real day to day problems it’s hopelessly confused by the large codebase full of state, external dependency on chunks of Unity, implicit hardware-dependent behaviours, etc. It has no idea how to work meaningfully with Unity’s scene graph or component model. I tried using MCP to empower it here: on a trivial test project it was fine. In a real project it got completely lost and broke everything after eating 30k tokens and 40 minutes of my time, mostly because it couldn’t understand the various (documented) patterns that straddled code files and scene structure.
For web and API development I give it an A, with just a little room for improvement. In this domain it's really effective all the way down the stack, from architectural and deployment decisions to implementation details and debugging - including digging really deep into package version incompatibilities and figuring out problems in seconds that would take me hours. My one criticism would be the - now familiar - "junior developer" effect, where it'll often run ahead with an over-engineered lump of machinery without spotting a simpler, more coherent pattern. As long as you keep an eye on it, it's fine.
So in summary: if what you’re doing is all in text, nothing in binary, doesn’t involve geometric or numerical reasoning, and has billions of lines of stack overflow solutions: you’ll be golden. Otherwise it’s still very hit and miss.