top | item 47144635

camgunz | 5 days ago

Unless this measures the entire SDLC longitudinally (like say, over a year) I'm not interested. I too can tell Claude Code to do things all day every day, but unless we have data on the defect rate it doesn't matter at all.
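
To make the longitudinal-measurement idea concrete, here's a minimal sketch of tracking an escaped-defect rate over a year of releases. All names and data here are hypothetical; the point is just that the metric needs defects traced back to the release that introduced them, collected over a long window:

```python
from datetime import date

# Hypothetical release log over one year:
# (release date, changes shipped, defects later traced to that release)
releases = [
    (date(2024, 1, 15), 40, 3),
    (date(2024, 4, 2), 55, 9),
    (date(2024, 7, 20), 62, 5),
    (date(2024, 11, 1), 48, 4),
]

def escaped_defect_rate(log):
    """Defects per 100 shipped changes, aggregated over the whole period."""
    shipped = sum(changes for _, changes, _ in log)
    defects = sum(bugs for _, _, bugs in log)
    return 100 * defects / shipped

print(f"{escaped_defect_rate(releases):.1f} defects per 100 changes")
```

Comparing this number before and after adopting a codegen tool (with everything else held roughly constant) is the kind of data the comment is asking for; raw change volume alone says nothing.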

pgwhalen | 5 days ago

I really am quite in awe of Claude Code recently, so definitely not a naysayer, but this is a really important point. It's so easy to create code, but am I shipping that much more to prod than I used to? A bit.

Obviously this depends heavily on your company, your setup, your risk tolerance, and whatnot.

camgunz | 4 days ago

I mean, Brooks' Mythical Man-Month says this explicitly: adding more programmers makes projects later because of coordination costs, which we haven't figured out how to eliminate (coordination isn't parallelization between agents; it's "oh, we discovered this problem; we need to go back to design," and so on).
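
Brooks' coordination-cost argument is usually summarized by his communication-paths observation: pairwise channels among n people grow as n(n-1)/2, i.e. quadratically. A quick illustration:

```python
def communication_paths(n):
    """Pairwise communication channels among n people (Brooks, Mythical Man-Month)."""
    return n * (n - 1) // 2

for team in (3, 10, 50):
    print(team, "people ->", communication_paths(team), "channels")
# A 50-person team has 1225 channels vs. 45 for a 10-person team,
# which is why adding people doesn't scale throughput linearly.
```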

falcor84 | 5 days ago

Do any of those companies collect and share data on their defect rates to give you a baseline to compare against?

camgunz | 4 days ago

That's my point. It's true that codegen models generate code faster than humans do. The important remaining questions are:

* How do we scale up the other parts of the SDLC (planning, feasibility analysis, design, testing, deployment, maintenance)?

* Which parts--if any--of the SDLC now take more or less time? For example: we've seemingly cut implementation time; does that come at the cost of maintenance, and if so, is it still a net win? Do we need to hire more designers, or do more user research?

The entire world is declaring "this is the future", but we don't even have simple data like "does this produce better code".