(no title)
SCdF | 1 month ago
When you can get multiple different agents all working on things and you are bouncing between them, careful review of their code becomes the bottleneck. So you start lowering your bar to "good enough", where "good enough" is not really good enough. It's a new good enough: you squint at the code, and as long as the shape is vaguely right and it works (meaning you click around a bit and nothing seems broken), it's ok.
Over time you lose your "theory"[1] of the software, and I would imagine that makes you effectively lower your bar even further, because you are less attached to what good should look like.
This is all anecdotal on my end, but it does feel like quality as a whole in the industry has tanked in the last maybe 12 months? It feels like there are more outages than normal. I couldn't find a good temporal outage graph, but if you trust this: https://www.catchpoint.com/internet-outages-timeline, the number of outages in 2025 is orders of magnitude up on 2024.
Maybe that's because there really are way more outages, or maybe it's because they are now tracking way more of them, I'm not sure. But it definitely _feels_ like we are in for a bumpy ride over the next few years.
[1] in the Programming as Theory Building sense: https://gareth.nz/ai-programming-as-theory-building.html