nemothekid | 22 days ago
There is a real advantage to having good code, especially when using agents. "Good code" makes iteration faster: the agent is less likely to make mistakes and will continue to produce extensible code that can easily be debugged (by both you and the agent).
A couple of months ago I refactored a module that had gotten unwieldy, and I tested whether Claude could add new features to the old code. Opus 4.5 just could not add the feature to the legacy module (a single monster function that had accumulated feature creep), but it completely one-shotted the feature after the refactor.
So there is clear value in having "clean code", but I'm not sure how valuable it is. If even AGI cannot handle tech debt, then there is value in at least building the scaffolding (or at least prompting for the scaffolding first). On the other hand, there may be a future where the human doesn't concern himself with "clean code" at all: if "clean code" only saves a sufficiently advanced agent five minutes, the scaffolding work is useless.
My reference point is assembly - I'm in my early 30s and I have never once cared about "clean" assembly. I have cared about the ASM of specific hot functions I had to optimize, but I've never learned what proper architecture for an assembly program looks like.
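To make the monster-function point concrete, here's a hypothetical sketch (not the actual module, and these names are made up): the feature-crept shape, where every requirement became another flag and another branch in one function, versus the refactored shape, where each concern is a small function an agent can extend in isolation.

```python
# Feature-crept shape: every new requirement became another parameter
# and another branch inside one ever-growing function.
def process(data, dedupe=False, sort_key=None, as_csv=False):
    if dedupe:
        data = list(dict.fromkeys(data))  # drop duplicates, keep order
    if sort_key:
        data = sorted(data, key=sort_key)
    if as_csv:
        return ",".join(str(x) for x in data)
    return data

# Refactored shape: small, single-purpose functions that compose.
# Adding a feature means adding a function, not another branch.
def dedupe(items):
    return list(dict.fromkeys(items))

def sort_by(items, key):
    return sorted(items, key=key)

def to_csv(items):
    return ",".join(str(x) for x in items)
```

Both shapes compute the same results; the difference is that the second one hands the agent an obvious place to put the next feature.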
perrygeo | 21 days ago
I have a codebase where variables are named poorly - nah, that's too generous: the variable names are insane, inconsistent even within a single file, and often outright wrong and misleading. No surprise - the LLMs choke and fail to produce viable changesets. Bad pattern = bad code generated from that pattern.
Going through and clarifying the naming (not even refactoring) was enough to establish the pattern correctly. A little pedantry and the LLM was off to the races.
If LLMs are the future of coding, the number one highest priority for the software industry should be to fix muddled naming, bad patterns, and obfuscated code. My bet is that building clean code foundations is the fastest way to fully agentic coding.
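As a hypothetical illustration (none of these names come from the actual codebase), this is what a rename-only cleanup looks like: the logic is untouched and the behavior is identical, but the second version gives an LLM a legible pattern to follow instead of a misleading one.

```python
# Before: names are wrong or misleading; the logic itself is fine.
def get_user(lst, flag):
    # "get_user" actually filters orders; "flag" is a price threshold.
    out = []
    for x in lst:
        if x["total"] > flag:
            out.append(x)
    return out

# After: a pure rename, no refactoring. Same behavior, but the names
# now describe what the code does.
def filter_orders_above(orders, min_total):
    expensive_orders = []
    for order in orders:
        if order["total"] > min_total:
            expensive_orders.append(order)
    return expensive_orders
```

A diff like this changes no control flow at all, yet it changes what pattern the model infers when asked to extend the file.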
amitprasad | 22 days ago
IMO we shouldn't strive to make an entire codebase pristine, but building anything on shaky foundations is a recipe for disaster.
The frontier models of 2026H2 may be good enough to start compacting and cleaning up entire codebases, but given the workflows the frontier labs suggest for coding agents, combined with their growing context windows, I don't see this becoming a priority or a design goal.
nemothekid | 21 days ago
I don't think this will happen - or rather, I don't think you can ask someone, human or machine, to come in and "compact and clean" your codebase. What counts as "clean" code depends on your assumptions, your constraints, and a guess about what the future will require.
Modularity where none is required becomes boilerplate. Over-rigidity becomes spaghetti code and "hacks". Deciding what should be modular and what should be constant requires some imagination about what the future might bring, and that requires planning.
IhateAI_3 | 22 days ago
[deleted]
MattGaiser | 22 days ago
A vast number of things. There are plenty of things I will accept having done at even mediocre quality, because in the old pre-AI world I would never have gotten to them at all.
Every friend with a startup idea. Every repetitive form I have to fill out every month for compliance. Just tooling for my day-to-day life.
charcircuit | 22 days ago
Since most work on software projects (coding, debugging, QA, etc.) is going to be done by AI agents, you should prioritize finding ways to increase those agents' velocity, which maximizes the velocity of the project.
>Are you that bad at it?
That is irrelevant.
>Is there anything you really have to get done regardless of quality right this second?
You are implying that AI agents produce low-quality work, but that is not the case. Being able to save time for an equivalent result is a good thing.
>Just write the code yourself, and stop training your replacement.
The AI labs are the ones training better AI.
aurareturn | 21 days ago