_dwt | 17 days ago
> If AI-generated code introduces defects at a higher rate, you need more review, not less AI.
I think that is very much up for debate despite being so frequently asserted without evidence! This strikes me as the same argument as we see about self-driving cars: they don't have to be perfect, because there is (or we can regulate that there must be) a human in the loop. However, we have research and (sometimes fatal) experience from other fields (aviation comes to mind) about "automation complacency" - the human mind just seems to resist thoroughly scrutinizing automation which is usually right.
allanmacgregor | 17 days ago
Right now AI / agentic coding doesn't seem to be a train we are going to be able to stop; and at the end of the day it is a tool like any other. Most of what seems to be happening is people letting AI fully take the wheel: not enough specs, not enough testing, not enough direction.
I keep experimenting and tweaking how much direction to give the AI in order to produce less fuckery and more productive code.
_dwt | 17 days ago
I don't know how to encourage the kind of review that AI code generation seems to require. Historically we've been able to rely on the fact that (bluntly) programming is "g-loaded": smart programmers probably wrote better code, with clearer comments, better formatting, and better documentation. Now, results that look great in each of those categories are a prompt away, which breaks some of the subconscious signals reviewers pick up on.
I also think there is probably a sweet spot for automation that does one or two simple things and fails noisily outside its confidence zone (aviation metaphor: an autopilot that holds heading and barometric altitude, and beeps loudly and shakes the stick when it can't maintain those conditions), and a sweet spot for "perfect" automation (aviation metaphor: uh, a drone that autonomously flies from point A to point B using GPS, radar, LIDAR, etc...?). In between, I'm afraid there be dragons.