top | item 47388646

Ask HN: How is AI-assisted coding going for you professionally?

434 points| svara | 18 days ago | reply

Comment sections on AI threads tend to split into "we're all cooked" and "AI is useless." I'd like to cut through the noise and learn what's actually working and what isn't, from concrete experience.

If you've recently used AI tools for professional coding work, tell us about it.

What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?

Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.

The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.

617 comments

[+] viccis|18 days ago|reply
Haven't seen this mentioned yet, but the worst part for me is that a lot of management LOVES to use Claude to generate 50 page design documents, PRDs, etc., and send them to us to "please review as soon as you can". Nobody reads it, not even the people making it. I'm watching some employees just generate endless slide decks of nonsense and then waffle when asked any specific questions. If any of that is read, it is by other peoples' Claude.

It has also enabled a few people to write code or plan out implementation details who haven't done so in a long time (sometimes a decade or more), and so I'm getting some bizarre suggestions.

Otherwise, it really does depend on what kind of code. I hand write prod code, and the only thing that AI can do is review it and point out bugs to me. But for other things, like a throwaway script to generate a bunch of data for load testing? Sure, why not.

[+] hdhdhsjsbdh|18 days ago|reply
It has made my job an awful slog, and my personal projects move faster.

At work, the devs up the chain now do everything with AI – not just coding – then task me with cleaning it up. It is painful and time consuming, the code base is a mess. In one case I had to merge a feature from one team into the main code base, but the feature was AI coded so it did not obey the API design of the main project. It also included a ton of stuff you don’t need in the first pass - a ton of error checking and hand-rolled parsing, etc, that I had to spend over a week unrolling so that I could trim it down and redesign it to work in the main codebase. It was a slog, and it also made me look bad because it took me forever compared to the team who originally churned it out almost instantly. AI tools are not good at this kind of design deconflicting task, so while it’s easy to get the initial concept out the gate almost instantly, you can’t just magically fit it into the bigger codebase without facing the technical debt you’ve generated.

In my personal projects, I get to experience a bit of the fun I think others are having. You can very quickly build out new features, explore new ideas, etc. You have to be thoughtful about the design because the codebase can get messy and hard to build on. Often I design the APIs and then have Claude critique them and implement them.

I think the future is bleak for people in my spot professionally – not junior, but also not leading the team. I think the middle will be hollowed out and replaced with principals who set direction, coordinate, and execute. A privileged few will be hired and developed to become leaders eventually (or strike gold with their own projects), but everyone in between is in trouble.

[+] ramraj07|18 days ago|reply
If you don't take a stand and refuse to clean their mess, aren't you part of the problem? No self-respecting proponent of AI-enabled development would deny that the engineers generating the code are still personally responsible for its quality.
[+] theshrike79|18 days ago|reply
Just reply with this to every AI programming task: https://simonwillison.net/2025/Dec/18/code-proven-to-work/

It's just plain unprofessional to YOLO shit with AI and force actual humans to read the code when even the "author" hasn't read it.

Also, API design etc. should be automatically checked by tooling in CI builds, and PR merges should be blocked until the checks pass.

[+] dude250711|18 days ago|reply
> It was a slog, and it also made me look bad because it took me forever compared to the team who originally churned it out almost instantly.

Why the hell are you playing hero? Delegate the choice to your manager: ruin the codebase, or allocate two weeks for clean-up - their choice. If the magical AI team claims they can do the integration faster - let them.

[+] phyzix5761|18 days ago|reply
> did not obey the API design of the main project

If they're handing you broken code, call them out on it. Say: this doesn't do what it says it does - do you want me to create a story for redoing all this work?

[+] AnimalMuppet|18 days ago|reply
I've heard of human engineers who are like that. "10x", but it doesn't actually work with the environment it needs to work in. But they sure got it to "feature complete" fast. The problem is, that's a long way from "actually done".
[+] suzzer99|18 days ago|reply
> At work, the devs up the chain now do everything with AI – not just coding – then task me with cleaning it up.

This has to be the most thankless job for the near future. It's hard and you get about as much credit as the worker who cleans up the job site after the contractors are done, even though you're actually fixing structural defects.

And god forbid you introduce a regression bug cleaning up some horrible redundant spaghetti code.

[+] ehnto|18 days ago|reply
That is definitely one tell: the hand-rolled input parsing or error handling that people would never have done at their own discretion. The bigger issue is that we already do the error checking and parsing at the different points of abstraction where it makes the most sense. So it's bespoke, and redundant.

That is on the people using the AI and not cleaning up/thinking about it at all.

[+] dawnerd|18 days ago|reply
We’ve had this too, and we changed our code review guidelines to allow rejection if code is clearly just AI slop. We’ve let about four contractors go so far over it. Sure, they get work done fast, but when it comes to making it production-ready they’re completely incapable. Last time we merged it anyway to hit a budget; it set everyone back and we’re still cleaning up the mess.
[+] visarga|18 days ago|reply
I think you need coding style guide files in each repo, including preferred patterns & code examples. Then you will see less and less of that.
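For what it's worth, a minimal sketch of what such a file might contain - every rule, path, and identifier below is hypothetical, invented purely for illustration:

```markdown
# STYLE.md - conventions for humans and agents alike

- Error handling: let errors propagate to the middleware layer; do not
  add ad-hoc try/catch at call sites.
- Parsing/validation: all request validation goes through `schemas/`;
  never hand-roll parsing inside handlers.
- Preferred data access pattern:

      const user = await userRepo.byId(id);  // repositories, not raw SQL

- Anti-pattern: building "reusable"/"modular" abstractions for features
  scoped as one-offs.
```

Pointing the agent at a file like this (e.g. from the repo's agent-instructions file) tends to cut down on exactly the redundant error checking and hand-rolled parsing mentioned elsewhere in the thread.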
[+] jf22|16 days ago|reply
All that cleanup can be done with AI and then you can add guidance to the codebase so that the next LLM will do a better job at conforming.
[+] fastasucan|18 days ago|reply
It makes my work suck, sadly. Team dynamics also contribute to that, admittedly.

Last year I was working on implementing a pretty big feature in our codebase. It required a lot of focus to get the business logic right, and at the same time you had to be very creative to make it feasible to run without hogging too many resources.

When I was nearly done and working on catching bugs, team members grew tired of waiting and started taking my code from x weeks ago (I have no idea why), feeding it to Claude or whatever, and then came back with a solution. So instead of finishing my code, I had to go through their versions of my code.

Each one of the proposals had one or more business requirements wrong and several huge bugs. Not one was any closer to a solution than mine was.

I would have appreciated any contribution to my code, but the assumption that it would be so easy to just take my code and finish it by asking Claude was rather insulting.

[+] Izkata|18 days ago|reply
I don't use it.

I know my mind fairly well, and I know my style of laziness will result in atrophying skills. Better not to risk it.

One of my co-workers already admitted as much to me around six months ago, and that he was trying not to use AI for any code generation anymore, but it was really difficult to stop because it was so easy to reach for. Sounded kind of like a drug addiction to me. And I had the impression he only felt comfortable admitting it to me because I don't make it a secret that I don't use it.

Another co-worker did stop using it to generate code because (if I'm remembering right) he can tell what it generates is messy for long-term maintenance, even if it does work and even though he's new to React. He still uses it often for asking questions.

A third (this one a junior) seemed to get dumber over the past year, opening merge requests that didn't solve the problem. In a couple of these cases my manager mentioned either seeing him use AI while they were pairing (it looked good enough so the problems just slipped by) or seeing hints in the merge request of how AI names or structures code.

[+] Xcelerate|18 days ago|reply
I've been using ChatGPT to teach myself all sorts of interesting fields of mathematics that I've wanted to learn but never had the time previously. I use the Pro version to pull up as many actual literature references as I can.

I don't use it at all to program despite that being my day job for exactly the reason you mentioned. I know I'll totally forget how to program. During a tight crunch period, I might use it as a quick API reference, but certainly not to generate any code. (Absolutely not saying it's not useful for this purpose—I just know myself well enough to know how this is going to go haha)

[+] tim-tday|18 days ago|reply
I’m the same way. But I took a bite and now I’m hooked.

I started using it for things I hate, ended up using it everywhere. I move 5x faster. I follow along most of the time. Twice a week I realize I’ve lost the thread. Once a month it sets me back a week or more.

[+] tehjoker|18 days ago|reply
I use AI to discuss and possibly generate ideas and tests, but I make sure I understand everything and type it in myself except for trivial stuff. The main value of an engineer is understanding things. AI can help me understand things better and faster. If I just set up plans for AI and vibe, human capital is neglected and declines. I don't think there's much of a future if you don't know what you're doing, but there is always a future for people with deep understanding of problems and systems.
[+] philipp-gayret|18 days ago|reply
The atrophy of manually writing code is certainly real. I'd compare it to using a paper map and a compass to navigate versus, say, Google Maps. I don't particularly want to lose the skill, even though being good at, and enjoying, the programming part of making software was my main source of income for more than a decade. I just can't escape being significantly faster with Claude Code.

> he can tell what it generates is messy for long-term maintenance, even if it does work and even though he's new to React.

When one can generate code in such a short amount of time, logically it is not hard to maintain. You could just re-generate it if you didn't like it. I don't believe this style of argument where it's easy to generate with AI but then you cannot maintain it after. It does not hold up logically, and I have yet to see such a codebase where AI was able to generate it, but now cannot maintain it. What I have seen this year is feature-complete language and framework rewrites done by AI with these new tools. For me the unmaintainable code claim is difficult to believe.

[+] onlyrealcuzzo|18 days ago|reply
I work at a FAANG.

Professionally, I have had almost no luck with it, outside of summarizing design docs or literally just finding something in the code that a simple search might not find, such as: where is this team's code that does X?

I have yet to successfully prompt it and get a working commit.

Further, I will add that I also don't know any ICs personally who have successfully used it. There are endless posts of people talking about how they're now 10x more productive and how everyone needs to do x, y, and z now - I just don't know any of these people.

Non-professionally, it's amazing how well it does on a small greenfield task, and I have seen that 10x improvement in velocity. But, at work, close to 0 so far.

Of the posts I've seen at work, they typically tend to be teams doing something new / greenfield-ish or a refactor. So I'm not surprised by their results.

[+] tim-tday|18 days ago|reply
This is wild. I’m on the other end.

I’ve probably prompted 10,000 lines of working code in the last two months. I started with Terraform, which I know backwards and forwards. It works perfectly 95% of the time, and I know where it will go wrong so I watch for that. (Working both greenfield, in other existing repos, and with other collaborators.)

Moved on to a big data processing project; works great. Needed a senior engineer to diagnose one small index problem, which he identified in 30s. (But I’d bonked on it for a week, because in some cases I just don’t know what I don’t know.)

Meanwhile a colleague wanted a sample of the data. Vibe coded that (extract from zip without decompressing). He wanted it randomized. One shot. Done. Then he wanted it randomized across 5 categories. Then he wanted 10x the sample size. Data request completed before the conversation was over. I would have worked on that for three hours before, and bonked if I hit the limit of my technical knowledge.
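For anyone curious what that kind of throwaway task amounts to: a randomized sample per category, streamed out of a zip without extracting everything, is roughly a per-category reservoir sample. A minimal sketch in Python - the file, member, and field names are hypothetical, not from the parent's project:

```python
import csv
import io
import random
import zipfile
from collections import defaultdict

def sample_from_zip(zip_path, member, key_field, per_category=10, seed=None):
    """Reservoir-sample up to `per_category` CSV rows per category,
    streaming straight out of the archive (no full extraction)."""
    rng = random.Random(seed)
    reservoirs = defaultdict(list)   # category -> sampled rows
    seen = defaultdict(int)          # category -> rows seen so far
    with zipfile.ZipFile(zip_path) as zf:
        with zf.open(member) as raw:  # streams the member; nothing written to disk
            reader = csv.DictReader(io.TextIOWrapper(raw, encoding="utf-8"))
            for row in reader:
                cat = row[key_field]
                seen[cat] += 1
                if len(reservoirs[cat]) < per_category:
                    reservoirs[cat].append(row)
                else:
                    # Algorithm R: replace with decreasing probability
                    j = rng.randrange(seen[cat])
                    if j < per_category:
                        reservoirs[cat][j] = row
    return reservoirs
```

Presumably the model one-shotted something similar; the whole task is about 25 lines once you know the reservoir trick, and bumping the sample size or category count is a one-argument change.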

Built a monitoring stack. Configured servers, used it to troubleshoot dozens of problems.

For stuff I can’t do, now I can do. For stuff I could do with difficulty now I can do with ease. For stuff I could do easily now I can do fast and easy.

Your vastly different experience is baffling and alien to me. (So thank you for opening my eyes)

[+] humbleharbinger|18 days ago|reply
I'm an engineer at Amazon - we use Kiro (our own harness) with Opus 4.6 underneath.

Most of my gripes are with the harness, CC is way better.

In terms of productivity I'm def 2-4X more productive at work, >10x more productive on my side business. I used to work overtime to deliver my features. Now I work 9-5 and am job hunting on the side while delivering relatively more features.

I think a lot of people are missing that AI is not just good for writing code. It's good for data analysis and all sorts of other tasks like debugging and deploying. I regularly use it to manage deployment loops (e.g. make a code change and then deploy the changes to gamma and verify they work by making a sample request and verifying output from cloudwatch logs etc). I have built features in 2 weeks that would take me a month just because I'd have to learn some nitty technical details that I'd never use again in my life.

For data analysis I have an internal Glue catalog; I can just tell it to query the data and write a script that analyzes X for me.

AI and agents particularly have been a huge boon for me. I'm really scared about automation but also it doesn't make sense to me that SWE would be automated first before other careers since SWE itself is necessary to automate others. I think there are some fundamental limitations on LLMs (without understanding the details too much), but whatever level of intelligence we've currently unlocked is fundamentally going to change the world and is already changing how SWE looks.

[+] jgilias|18 days ago|reply
I saw somewhere that you guys had an All Hands where juniors were prohibited from pushing AI-assisted code due to some reliability thing going on? Was that just a hoax?
[+] hn_throwaway_99|18 days ago|reply
> I have built features in 2 weeks that would take me a month just because I'd have to learn some nitty technical details that I'd never use again in my life.

In the bucket of "really great things I love about AI", that would definitely be at the top. So often in my software engineering career I'd have to spend tons of time learning and understanding some new technology, some new language, some esoteric library, some cobbled-together build harness, etc., and I always found it pretty discouraging when I knew I'd never have reason to use that tech outside the particular codebase I was working on at the time. And far from being rare, working at a fairly large company, I found that was a pretty frequent occurrence. E.g. I'd look at a design doc or feature request and think to myself "oh, that's pretty easy and straightforward", only to go into the codebase and see the original developer/team had decided on some extremely niche transaction handling library or whatever (or worse, homegrown with no tests...), and trying to figure out that esoteric tech turned into 85% of the actual work. AI doesn't reduce that to 0, but I've found it has been a huge boon for understanding new tech, and especially for getting my dev environment and build set up well, much faster than I could do manually.

Of course, AI makes it a lot easier to generate exponentially more poorly architected slop, so not sure if in a year or two from now I'll just be ever more dependent on AI explaining to me the mountains of AI slop created in the first place.

[+] clintonb|18 days ago|reply
> make a code change and then deploy the changes to gamma and verify they work by making a sample request and verifying output from cloudwatch logs etc

This has been a godsend over the past week while deploying a couple services. One is a bridge between Linear and our Coder.com installation so folks can assign the work to an agent. Claude Code can do most of the work while I sleep since it has access to kubectl, Linear MCP, and Coder MCP. I no longer have to manually build, deploy, test, repeat. It just does it all for me!

[+] simonreiff|18 days ago|reply
Mind my asking why you're job hunting, and what you wish you could do in your day job that you're not?
[+] 3form|17 days ago|reply
How do you deal with the risk of the LLM generating malicious code and then running it? I suspect it's a bit more difficult to set it up tailored to your needs in a big corp.
[+] gerdesj|18 days ago|reply
"I'm an engineer at Amazon"

Sanctioned comment?

[+] wk_end|18 days ago|reply
Around a year ago I started a new position at a very large tech company that I won't name, working on a pre-existing web project there. The code base isn't terrible - though not very good either, by-and-large - but it's absolutely massive, often over-engineered, pretty unorthodox, and definitely has some questionable design decisions; even after more than a year of working with it I still feel like a beginner much of the time.

This year I grudgingly bit the bullet and began using AI tools, and to my dismay they've been a pretty big boon for me in this case. Not just for code generation - they're really good at probing the monolith and answering questions I have about how it works. Before, I'd spend days poring over code before starting work, trying to figure out the right way to build something or where to break in, pinging people over in India or Eastern Europe with questions and hoping they'd reply overnight. AI's totally replaced that, and it works shockingly well.

When I do fall back on it for code generation, it's mostly just to mitigate the tedium of writing boilerplate. The code it produces tends to be pretty poor - both in terms of style and robustness - and I'll usually need to take at least a couple of passes over it to get it up to snuff. I do find this faster than writing everything out by hand in the end, but not by a lot.

For my personal projects I don't find it adds much, but I do enjoy rubber ducking with ChatGPT.

[+] simonw|18 days ago|reply
The majority of code I've written since November 2025 has been created using agents, as opposed to me typing code into a text editor. More than half of that has been done from my iPhone via Claude Code for web (bad name, great software.)

I'm enjoying myself so much. Projects I've been thinking about for years are now a couple of hours of hacking around. I'm readjusting my mental model of what's possible as a single developer. And I'm finally learning Go!

The biggest challenge right now is keeping up with the review workload. For low stakes projects (small single-purpose HTML+JS tools for example) I'm comfortable not reviewing the code, but if it's software I plan to have other people use I'm not willing to take that risk. I have a stack of neat prototypes and maybe-production-quality features that I can't ship yet because I've not done that review work.

I mainly work as an individual or with one other person - I'm not working as part of a larger team.

[+] QuadrupleA|18 days ago|reply
As a veteran freelance developer - aside from some occasional big wins, I'd say it's been net neutral or even net negative to my productivity. When I review AI-generated code carefully (and if I'm delivering it to clients I feel that's my responsibility) I always find unnecessary complexity, conceptual errors, performance issues, looming maintainability problems, etc. If I were to let it run free, these would just compound.

A couple "win" examples: add in-text links to every term in this paragraph that appears elsewhere on the page, plus corresponding anchors in the relevant page parts. Or, replace any static text on this page with any corresponding dynamic elements from this reference URL.

Lose examples are constant, though: edit-format glitches (not matching the searched text; even the venerable Opus 4.6 constantly screws this up), unnecessary intermediate variables, ridiculously over-cautious exception handling, failing to see opportunities to isolate repeated code into a function, or to use an existing function that exactly implements said N lines of code, etc.

[+] slurpyb|18 days ago|reply
It can only result in more work if you freelance, because if you disclose that you used LLMs then you did it faster than usual, and presumably at lower quality, so you have to deliver more to retain the same income. Except now you’re paying all the providers for all the models because you start hitting usage limits, and Claude sucks on the weekends, and your drive is full of ‘artifacts’, which incurs mental overhead that is exacerbated by your crippling ADHD.

And then all of a sudden you’re just arguing with the terminal all day - the specs are written by GPT, delivered in an email written by GPT. Sometimes they don’t even have the time to slice their prompt from the edges of the paste, but the only thing I can think of is “I need to make the most of 0.5x off-peak Claude rates.”

Fuck.

I got lots of pretty TUIs though, so that's neat.

[+] vemv|18 days ago|reply
Have you perceived a market shift for freelancers given the rise of AI coding?

It seems to me that sadly, paying for getting a few isolated tasks done is becoming a thing of the past.

[+] lazy_afternoons|18 days ago|reply
Surprised to see HN being bearish on this.

I have 10 years of experience. I am a reasonable engineer. I can tell you that about half of the hype on twitter is real. It is a real blessing for small teams.

We have 100k DAU for a consumer CRUD app. We built and maintain everything in-house with 3 engineers. This would have taken at least 10 engineers 3-4 years back.

We don't have a bug list. We are not "vibe coding"; 2 of us understand almost all of the codebase. We have processes to make sure the core integrity of the codebase doesn't go for a toss.

No one has touched the editor in months.

Even the product folks can raise a PR for small config changes from slack.

Velocity is through the roof and code quality is as good if not better than when we write by hand.

We refactor A LOT more than before because we can afford to.

I love it.

[+] greenpizza13|18 days ago|reply
I work at a very prominent AI company. We have access to every tool under the sun. There are various levels of success for all levels — managers, PMs, engineers.

We have cursor with essentially unlimited Opus 4.6 and it’s fundamentally changed my workflow as a senior engineer. I find I spend much more time designing and testing my software and development time is almost entirely prompting and reviewing AI changes.

I’m afraid my coding skills are atrophying - in fact I know they are - but I’m not sure the coding was the part of my job I truly enjoyed. I enjoy thinking at a higher level: architecture, connecting components, focusing on the user experience. But I think using these AI tools is a form of golden handcuffs. If I go work at a startup without the money to pay for these models, I think for the first time in my career I would be less able to successfully code a feature than I was last year.

So professionally there are pros and cons. My design and architecture skills have greatly improved as I am spending more time doing this.

Personally it’s so much fun. I’ve made several side projects I would have never done otherwise. Working with Claude code on greenfield projects is a blast.

[+] kreyenborgi|18 days ago|reply
Net negative. I do find it genuinely useful for code review, as a "better search engine" or for snippets, and sometimes for rubber ducking, but for agent mode and actual longer coding tasks I always end up rewriting the code it makes. Whatever it produces always looks like the work of one of those students who constantly slightly misunderstands and only cares about minor test objectives, never seeing the big picture. And I waste so much time on the hope that this time it will make me more productive, if only I can nudge it in the right direction - maybe I'm not holding it right, not using the right tools/processes/skills, etc. It feels like JavaScript frameworks all over again.
[+] christophilus|18 days ago|reply
Same. I vacillate between thinking our profession will soon be over to thinking we’re perfectly safe. Sometimes, it’s brilliant. It is very good at exploring and explaining a codebase, finding bugs, and doing surgical fixes. It’s sometimes good at programming larger tasks, but only if you really don’t care about code quality.

The one thing I’m not sure about is: does code quality and consistency actually matter? If your architecture is sufficiently modular, you can quickly and inexpensively regenerate any modules whose low quality proves to be problematic.

So, maybe we really are fucked. I don’t know.

[+] Brystephor|18 days ago|reply
Im at a public, well known tech company.

We got broad and wide access to AI tools maybe a month ago now. AI tools meaning claude code, codex, cursor and a set of other random AI tools.

I use them very often. They've taken away a lot of the fun and relaxing parts of my job and have overall increased my stress. I am on the product side of the business, and it feels necessary for me to have 10 new ideas; now the people with the most ideas will be rewarded, which I am not as good at. I've tried having the agents identify opportunities for infra improvements and had no luck there. I haven't tried it for product suggestions, but I think it would be poor at that too.

I get sent huge PRs and huge docs now that I wasn't sent before, with pressure to accept them as-is.

I write code much faster but commit it at the same pace due to reviews taking so long. I still generate single-task PRs to keep them reviewable and do my own thorough review beforehand. I always have an idea in my head of how it should work before getting started, and I push the agent to use my approach. The AI tools are good at catching small bugs, like mutating things across threads. I like to use them to generate plans for implementation (which only I and the bots read; I still handwrite docs that are broadly shared and referenced).

Overall, AI has me nervous, primarily because it does the parts that I like very well, and has me spending a higher portion of my job on the things I don't like or find more tiresome.

[+] michaelteter|17 days ago|reply
It is phenomenal.

I have a lot of experience, low and high level. These AI tools allow me to "discuss" possibilities, research approaches, and test theories orders of magnitude faster than I could in the past.

I would roughly estimate that my ability to produce useful products is at least 20x. A good bit of that 'x' is because of the elimination of mental barriers. There have always been good ideas I had which I knew could work, but I also knew that to prove that they could work would take a lot of focus and research (leveling up on specific things). And that takes human energy - while I'm busy also trying to do good things in my day job.

Now I have immensely powerful minions and research assistants. I can test any theory I have in an hour or less.

While these minions are being subsidized in the wonderful VC way, I can get a lot done. If the real costs start to bleed through, I'll have to scale back my explorations. (Because at some point, I'll have to justify testing my theories against spending $200-300.)

To your questions, I'm usually a solo builder anyway. I've built serious things for serious companies, but almost always solo. So that's quite a burden. And now I'm weary of all that corporate stuff, so I build for myself. And what a joy it is, having these powertools.

If I were in a company right now, I could absolutely replace a team of 5 people with me + AI... assuming the CTO wasn't the (usual) limiting factor.

[+] notatoad|18 days ago|reply
It’s completely inconsistent for me, and any time I start to think it is amazing, I quickly am proven wrong. It definitely has done some useful things for me, but as it stands any sort of “one shot” or vibecoding where I expect the ai to complete a whole task autonomously is still a long ways off.

Copilot completions are amazingly useful. chatting with the chatbot is a super useful debugging tool. Giving it a function or database query and asking the ai to optimize it works great. But true vibe coding is still, imho, more of a party trick than an actual productivity multiplier. It can do things that look useful, and it can do things that solve immediate self-contained problems. but it can’t create launchable products that serve the needs of multiple users.

[+] wg0|18 days ago|reply
I foresee that AI blindness at the CEO/CFO level, plus the general hype in our society (from the technical and non-technical press and media) that software engineering is over, will result in a severe talent shortage in 5-7 years, with bidding wars for talent driving salaries up 3x or more.
[+] ares623|18 days ago|reply
Then we'll be back to 2019/2020 cycle and round and round the merry go round we go
[+] VoidWhisperer|18 days ago|reply
It has definitely made me more productive. That said, the productivity isn't coming from using it to write business logic (I prefer to have an in-depth understanding of the logical parts of the codebases I'm working on. I've also seen cases in my work codebases where code was obviously AI-generated and ended up with gaping security or compliance issues that no one seemed to catch at the time).

The productivity comes from three main areas for me:

- Having the AI coding assistant write unit tests for my changes. This used to be by far my least favorite part of my job writing software, mostly because instead of solving problems, it was the monotonous process of gathering mock data to exercise specific pathways, trying to make sure I was covering all the cases, and then debugging the tests. An AI coding assistant lets me just review the tests to make sure that they cover all the cases I can think of and that there aren't any overtly wrong assumptions.

- Research. It has been extraordinarily helpful in giving me insight into how to design some larger systems when I have extremely specific requirements but don't necessarily have the complete experience to architect them myself - I know enough to understand if the system is going to correctly accomplish the requirements, but not to have necessarily come up with architecture as a whole

- Quick test scripts. It has been extremely useful for generating quick SQL data for testing things, along with quick one-off scripts to test things like external provider APIs
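As an illustration of that last bullet, a "quick SQL test data" generator is usually just a few lines. A sketch in Python - the table name and columns here are invented for the example, not from the parent's codebase:

```python
import random
import string

def fake_users_sql(n, table="users", seed=None):
    """Generate INSERT statements with randomized test rows for a
    hypothetical users(id, email, plan) table."""
    rng = random.Random(seed)
    stmts = []
    for i in range(1, n + 1):
        # random 8-letter local part for a throwaway email address
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        plan = rng.choice(["free", "pro", "enterprise"])
        stmts.append(
            f"INSERT INTO {table} (id, email, plan) "
            f"VALUES ({i}, '{name}@example.com', '{plan}');"
        )
    return "\n".join(stmts)

# print(fake_users_sql(100, seed=1)) -> paste into a scratch database
```

Seeding the RNG keeps the generated data reproducible, which matters when a teammate needs to reproduce the same test state.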

[+] ivraatiems|18 days ago|reply
> Research. It has been extraordinarily helpful in giving me insight into how to design some larger systems when I have extremely specific requirements but don't have the complete experience to architect them myself. I know enough to judge whether a system will correctly accomplish the requirements, but not enough to have come up with the architecture as a whole.

I agree, this is where coding agents really shine for me. Even if they get the details wrong, they're often quite good at pinpointing where things happen and how.

They're also great for rapid debugging, or assisted bug fixing. Often, I will manually debug a problem, then tell the AI, "This exception occurs in place Y because thing X is happening, here's a stack trace, propose a fix", and it will do the work of figuring out where to put the fix for me. I usually already know WHAT to do; it's just a matter of WHERE in context. Saves a lot of time.

Likewise, if I have something where I want thing X to do Y, and X already does Z, then I'll say, "Implement a Y that works like Z but for A B C", and it'll usually get it really close on the first try.

[+] adelie|18 days ago|reply
i'm a senior engineer at a mid-size, publicly traded company.

my team has largely avoided AI; our sister team has been quite gung-ho on it. i recently handed off a project to them that i'd scoped at about one sprint of work. they returned with a project design that involved four microservices, five new database tables, and an entirely new orchestration and observability layer. it took almost a week of back-and-forth to pare things down.

since then, they've spent several sprints delivering PRs that i now have to review. there are lots of things that don't work, don't make sense, or reinvent from scratch things we already have. almost half the code is dedicated to creating 'reusable' and 'modular' classes (read: boilerplate) for a project that was distinctly scoped as a one-off. as a result, review takes hours, and it's cut into my own sprint velocity. i'm doing all the hard work but receiving none of the credit.

management just told me that every engineer is now required to use AI. i'm tired.

[+] drrob|18 days ago|reply
I've only recently begun using copilot auto-complete in Visual Studio using Claude (doing C# development/maintenance of three SaaS products). I've been a coder since 1999.

The suggestions are correct about 40% of the time, so I'm actually surprised when they're right, rather than becoming reliant on them. It saves me maybe 10 minutes a day.

[+] scuff3d|18 days ago|reply
The only part of AI autocomplete I've found I really like is when I have a function call that takes a dozen arguments, and the autocomplete can just shove them all together for me. Such a nice little improvement.
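For example (a made-up function, but representative of the shape; autocomplete's win is threading a dozen locals through the call for you):

```python
# A wide-signature function: the kind of call site where autocomplete
# saves real keystrokes by filling in every argument.
def create_user(name, email, role, team, region, locale,
                active=True, verified=False, admin=False,
                newsletter=False, theme="light", timezone="UTC"):
    """Bundle a dozen parameters into a user record."""
    return {"name": name, "email": email, "role": role, "team": team,
            "region": region, "locale": locale, "active": active,
            "verified": verified, "admin": admin,
            "newsletter": newsletter, "theme": theme,
            "timezone": timezone}

# The tedious part autocomplete handles: wiring all twelve arguments.
user = create_user("Ada", "ada@example.com", "engineer", "platform",
                   "eu-west", "en_GB", active=True, verified=True,
                   admin=False, newsletter=True, theme="dark",
                   timezone="Europe/London")
```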
[+] robbbbbbbbbbbb|18 days ago|reply
Context: micro (5 person) software company with a mature SaaS product codebase.

We use a mix of agentic and conversational tools, just pick your own and go with it.

For Unity development (our main codebase and source of value) I give current-gen tools a C- for effectiveness. For solving confined, well-modularisable problems (e.g. refactor this texture loader; implement support for this material extension) it’s good. For most real day-to-day problems it’s hopelessly confused by the large codebase full of state, external dependencies on chunks of Unity, implicit hardware-dependent behaviours, etc. It has no idea how to work meaningfully with Unity’s scene graph or component model. I tried using MCP to empower it here: on a trivial test project it was fine. In a real project it got completely lost and broke everything after eating 30k tokens and 40 minutes of my time, mostly because it couldn’t understand the various (documented) patterns that straddle code files and scene structure.

For web and API development I give it an A, with just a little room for improvement. In this domain it’s really effective all the way down the logical stack, from architectural and deployment decisions to implementation details and debugging, including digging deep into package version incompatibilities and figuring out in seconds problems that would take me hours. My one criticism would be the now-familiar “junior developer” effect, where it’ll often run ahead with an over-engineered lump of machinery without spotting a simpler, more coherent pattern. As long as you keep an eye on it, it’s fine.

So in summary: if what you’re doing is all in text, nothing in binary, doesn’t involve geometric or numerical reasoning, and has billions of lines of Stack Overflow solutions, you’ll be golden. Otherwise it’s still very hit and miss.