
mashlol | 2 months ago

AI almost always reduces the time from "I need to implement this feature" to "there is some code that implements this feature".

However in my experience, the issue with AI is the potential hidden cost down the road. We either have to:

1. Code review the AI-generated code line by line, at the time it's generated, to ensure it's exactly what you'd have produced yourself, or

2. Pay an unknown amount of tech debt down the road when it inevitably wasn't what you'd have done yourself and turns out not to be extensible, scalable, well-written code.


kentm | 2 months ago

#2 is happening a lot more than people think. It’s incredibly hard to quantify tech debt in software and so as a result productivity measurements are pretty inaccurate. Even without AI there is a trend of devs writing a barely working system and then throwing it over the wall to “maintenance programmers”. Said devs are often rated highly by management as being productive compared to the “maintenance devs,” but all they really did was make other people deal with their garbage. I’ve seen these sorts of systems take months to years to be production ready while the original dev is already off to their new gig (and maybe cluelessly bragging on HN about how much better they are than the people cleaning up their mess).

To get an accurate productivity metric you’d have to somehow quantify the debt and “interest” vs some alternative. I don’t think that’s possible to do, so we’re probably just going to keep digging deeper.

jimbo808 | 2 months ago

RE 2: It's not that far down the road either. Lazily reviewed or unreviewed LLM code rapidly turns your codebase into an absolute mess that LLMs can't maintain either. Very quickly you find yourself with lots of redundant code and duplicated logic, random unused code that's called by other unused code inside a branch that only tests will trigger, stuff like that. Eventually LLMs start fixing the code that isn't used and then confidently report that they solved the problem, filling up the context window with redundant nonsense every prompt, so they can't get anywhere. Yolo AI coding is the payday loan of tech debt.

DougN7 | 2 months ago

This can happen sooner than you think too. I asked for what I thought was a simple feature and the AI wrote and rewrote a number of times trying to get it right, and eventually (not making this up) it told me the file was corrupt and could I please restore it from backup. This happened within about 20-30 minutes of asking for the change.

jennyholzer3 | 2 months ago

This is why I say LLMs are for idiots

brightball | 2 months ago

Exactly. Optimizations in one area simply move the bottleneck, so to truly realize gains you have to optimize the entire software pipeline.

nradov | 2 months ago

Exactly right. It turns out that writing code is hardly ever the real bottleneck. People should spend some time learning the basics of queueing theory.
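The bottleneck argument can be shown with a toy calculation (the stage names and rates below are made up for illustration, not taken from the thread): in a serial pipeline, sustained throughput is capped by the slowest stage, so speeding up a non-bottleneck stage buys nothing.

```python
def throughput(stage_rates):
    """Sustained throughput of a serial pipeline (items/hour).

    Each stage processes at its own rate; the pipeline as a whole
    can only move as fast as its slowest stage (the bottleneck).
    """
    return min(stage_rates)

# Hypothetical stages and rates, in features per day.
before = {"write code": 2.0, "code review": 1.0, "test + deploy": 1.5}
after  = {"write code": 6.0, "code review": 1.0, "test + deploy": 1.5}

# AI triples coding speed, but review is still the bottleneck:
assert throughput(before.values()) == 1.0
assert throughput(after.values()) == 1.0
```

Tripling the "write code" rate leaves overall throughput unchanged at 1.0 features per day; the gain only appears if the review stage is also sped up.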

http://lpd2.com/

linsomniac | 2 months ago

>Code review the AI generated code line by line

Have you considered having an AI code review the AI code before handing it off to a human? I've been experimenting with having Claude work on some code and commit it, then having Codex review the changes in the most recent git commit, then eyeballing the recommendations and either having Codex make the changes or handing them back to Claude. That has seemed quite effective so far.

Maybe it's turtles all the way down?