top | item 47201481


keeda | 1 day ago

> The organizational assumption that reviewed code is understood code no longer holds.

This never held.

As somebody who has inherited codebases thrown over the wall through acquisitions and re-orgs, there is absolutely nothing in this article related to "code generated by AI" that cannot be attributed to "code generated by humans who are no longer at the company." Heck, these have happened when revisiting code I myself wrote years ago.

In a previous life 10 years ago, I inherited a large Python codebase from an acquisition, where a bug occurred due to a method argument sometimes being passed in as a string and sometimes as a number. Despite reproducing it multiple times over hours of debugging, I could never figure out the code path that caused it. I suspect it was some dynamic magic where a function name was generated by concatenating disparate strings, each of which was propagated via multiple asynchronous message queues (making the debugger useless), and then eval'd. After hours of trial and error and grepping, I never found the offending callsite, and the original authors had long since moved on. My fix was just to put an "x = int(x)" in the function and move on.
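A minimal sketch of that anti-pattern, with all names invented for illustration: the handler name is assembled from fragments that arrive separately (a plain list stands in for the message queues here), then resolved dynamically. This sketch uses a `globals()` lookup rather than a literal `eval` call, but the debugging problem is identical: there is no greppable callsite, and a stringly-typed argument slips through.

```python
def handle_user_update(x):
    # The defensive fix from the story: coerce at the boundary
    # rather than chase down every caller.
    x = int(x)
    return x * 2

def dispatch(parts, arg):
    # parts might be ["handle", "user", "update"], each fragment having
    # traveled through a different async queue before arriving here.
    fname = "_".join(parts)
    # Dynamic resolution: grepping for "handle_user_update" finds the
    # definition but never this call.
    return globals()[fname](arg)

print(dispatch(["handle", "user", "update"], "21"))  # a string sneaks in; prints 42
```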

I would bet this was due to a shortcut somebody took under time pressure, something you can totally avoid simply by having the AI refactor everything instead.

We know what the solutions for that are, and they're the same -- in fact, they should be the default mode -- for AI-generated code. They are basically everything that we consider "best practices": avoiding magic, better types, comprehensive tests, documentation, modularity, and so on.
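To make "avoiding magic" and "better types" concrete, here is a hedged sketch (names again invented) of the boring alternative to the eval-style dispatch above: an explicit, type-annotated dispatch table. Every handler is named, greppable, and visible to a static type checker.

```python
from typing import Callable

def handle_user_update(x: int) -> int:
    return x * 2

# Explicit dispatch table: no string concatenation, no eval.
# mypy/pyright can verify both the keys' handlers and their signatures.
HANDLERS: dict[str, Callable[[int], int]] = {
    "user_update": handle_user_update,
}

def dispatch(event: str, arg: int) -> int:
    # A type checker flags dispatch("user_update", "21") before it ships,
    # instead of a runtime bug surfacing years later.
    return HANDLERS[event](arg)

print(dispatch("user_update", 21))  # prints 42
```

The callsite is now a plain dictionary lookup, so `grep user_update` finds both ends of the call, which is exactly what the eval-based version made impossible.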
