
Arch-TK | 1 month ago

"actually good enough to meet the goals?"

There's "okay for now" and then there's "this is so crap that if we set our bar this low we'll be knee deep in tech debt in a month".

A lot of LLM output in the specific areas _I_ work in is firmly in that latter category and many times just doesn't work.


gbnwl | 1 month ago

So I can tell you don't use these tools, or at least not much, because at the speed of development with them you'll be knee-deep in tech debt in a day, not a month. But as a corollary, the same agentic coding tools can do the equivalent of weeks of tech-debt cleanup the next day. I think this applies to greenfield, AI-first projects that work this way from the get-go with few humans in the loop (human-to-human communication definitely becomes the rate-limiting step). But I imagine that's not the nature of your work.

Arch-TK | 1 month ago

Yes, if I went hard on something greenfield, I'm sure I'd be knee-deep in tech debt in less than a day.

That being said, given the quality of code these things produce, I just don't see that ever changing. These things require a lot of supervision, and at some point you're spending more time asking for revisions than you would have spent just writing it yourself.

There's a world of difference between an MVP, which in the right domain you can get done much faster now, and a finished product.

daveguy | 1 month ago

I think you missed your parent post's phrase "in the specific areas _I_ work in" ... LLMs are a lot better at CRUD and boilerplate than at novel hardware interfaces and a bunch of other domains.

ambicapter | 1 month ago

I mean, there's also: "this looks fine, but if I had actually written this code I would naturally have spent more time on it, which would have led me to anticipate its future just a little bit more. Instead I'll only feel that awkwardness when I come back to this code in two weeks, and then we'll do it all over again." It's a spectrum.

trollbridge | 1 month ago

Right.

And greenfield code is some of the most enjoyable to write, yet apparently we should let robots do the thing we enjoy most and reserve the most miserable tasks for humans, since the robots appear unable to do them.

I have yet to see an LLM or coding agent that can usefully be prompted with "Please fix the subtle bugs" or "Please retire the technical debt described in issue #6712."