item 44967435


d_watt | 6 months ago

It's potentially the opposite. If you instrument a codebase with documentation and configuration so that AI agents work well in it, then in a year that agent will be just as good (or better, with model progress) at adding new features to that same codebase.

This assumes you're adding documentation, tests, instructions, and other scaffolding along the way, of course.
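For example, that kind of scaffolding often takes the form of a repository-level instructions file for agents (AGENTS.md is one common convention; every path and command below is purely illustrative):

```markdown
# AGENTS.md — guidance for AI coding agents (illustrative example)

## Build & test
- Install dependencies: `npm install`
- Run the full test suite before any commit: `npm test`

## Conventions
- New endpoints live under `src/api/` and need a matching test in `tests/api/`.
- Never edit generated files in `dist/`.

## Architecture notes
- Payment logic is isolated in `src/billing/`; read `docs/billing.md` first.
```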


bigiain | 6 months ago

I wonder how soon (if it isn't happening already) AI coding tools will start behaving like early-career developers who claim all the existing code written by others is crap, and go on to convince management that a ground-up rewrite is required.

(And now I'm wondering how soon the standard AI-first response to bug reports will be a complete rewrite by AI using the previous prompts plus the new bug report? Are people already working on CI/CD systems that replace the CI part with whole-project AI rewrites?)

kmoser | 6 months ago

As the cost of AI-generated code approaches zero (both in time and money), I see nothing wrong with letting the AI agent spin up a dev environment and take its best shot. If it can prove with rigorous testing that the new code works, is at least as reliable as the old code, and is written better, then it's a win/win. If not, delete that agent and move on.
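A minimal sketch of that acceptance gate, assuming you have per-test pass/fail results for the legacy code and the rewrite (the function name and data shape here are made up for illustration):

```python
def accept_rewrite(old_results: dict[str, bool], new_results: dict[str, bool]) -> bool:
    """Accept the AI rewrite only if it passes every test the legacy code passed.

    old_results / new_results map test names to pass (True) or fail (False).
    """
    old_passing = {name for name, ok in old_results.items() if ok}
    new_passing = {name for name, ok in new_results.items() if ok}
    # "At least as reliable": the rewrite may fix old failures,
    # but must not regress any test the legacy code passed.
    return old_passing <= new_passing


# Illustrative use: the rewrite fixes one old failure and regresses nothing.
old = {"test_login": True, "test_export": False}
new = {"test_login": True, "test_export": True}
print(accept_rewrite(old, new))  # → True
```

The subset check is the whole gate: any test that went from pass to fail rejects the rewrite outright, which matches the "delete that agent and move on" policy.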

On the other hand, if the agent is just as capable of fixing bugs in legacy code as rewriting it, and humans are no longer in the loop, who cares if it's legacy code?

AntwaneB | 6 months ago

Author here. You're right, but by definition, once you've done all of this, the Bus Factor has already increased:

> This assumes you're adding documentation, tests, instructions, and other scaffolding along the way, of course.

It's not just about knowledge in someone's brain, it's about knowledge persistence.