(no title)
tabs_or_spaces | 6 days ago
> ...
> Writing good code remains significantly more expensive
I think this is a bad argument. Code was expensive because you were trying to write the expensive good code in the first place.
When you drop your standards, then writing generated code is quick, easy and cheap. Unless you're willing to change your standard, getting it back to "good code" is still an equivalent effort.
There are alternative ways to define the argument for agentic coding, this is just a really really bad argument to kick it off.
bfbf|5 days ago
Last month I did the majority of my work through an agent, and while I did review its work, I’m now finding edge cases and bugs of the kind that I’d never have expected a human to introduce. Obviously it’s on me to better review its output, but the perceived gains of just throwing a quick bug ticket at the AI quickly disappear when you want to have a scalable project.
vips7L|5 days ago
Even my seniors are just copy pasting out whatever Claude says. People are naturally lazy, even if they know what they’re doing they don’t want to expend the effort.
simonw|5 days ago
I chose these words because I don't think good code is nearly as expensive with coding agents as it was without them.
You still have to actively work to get good code, but it takes so much less time when you have a coding agent who can do the fine-grained edits on your behalf.
I firmly believe that agentic engineering should produce better code. If you are moving faster but getting worse results it's worth stopping and examining if there are processes you could fix.
akiselev|5 days ago
I’m using a combination of 100s of megabytes of Ghidra decompiled delphi DLLs and millions of lines of decompiled C# code to do this reverse engineering. I can’t imagine even trying such a large project for LLMs so while a good implementation is still taking a lot of time, it’s definitely a lot cheaper than before.
[1] I saw your red/green TDD article/book chapter and I don’t think you go far enough. Since we have agents, you can generalize red/green development to a lot of things that would be impractical to implement in tests. For example I have agents analyze binary diffs of the file format to figure out where my implementation is incorrect without being bogged down by irrelevant details like the order or encoding of parameters. This guides the agent loop instead of tests.
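A minimal sketch of what such a diff-guided loop might look like (the function and sample bytes are hypothetical; the real setup compares reference binaries against the reimplementation's output):

```python
def first_divergence(reference: bytes, produced: bytes):
    """Return the offset of the first differing byte, or None if identical."""
    for offset, (a, b) in enumerate(zip(reference, produced)):
        if a != b:
            return offset
    if len(reference) != len(produced):
        # One file is a prefix of the other; divergence starts where it ends.
        return min(len(reference), len(produced))
    return None

# The agent loop can use the offset as its feedback signal: regenerate,
# re-run, and check whether the divergence point moved deeper into the file.
ref = bytes([0x4D, 0x5A, 0x01, 0x02])
out = bytes([0x4D, 0x5A, 0xFF, 0x02])
print(first_divergence(ref, out))  # -> 2
```

Unlike a conventional unit test, this tells the agent *where* in the output it went wrong, without asserting anything about parameter order or encoding.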
inejge|5 days ago
Which is nuance that will get overlooked or waved away by upper management who see the cost of hiring developers, know that developers "write code", and can compare the developer salary with a Claude/Codex/whatever subscription. If the correction comes, it will be late and at the expense of rank and file, as usual. (And don't be naive: if an LLM subscription can let you employ fewer developers, that subscription plus offshore developers will enable even more cost saving. The name of the game is cost saving, and has been for a long time.)
mexicocitinluez|5 days ago
Still navigating this territory, but I think a lot of people are getting caught up on the idea that producing code is simply a matter of typing it at the keyboard.
One of the benefits of something like Claude Code isn't just the code it produces, but the ability to quickly try out ideas, get some feedback, AND THEN write the good code.
> than the more aesthetically pleasing
Agreed. What even is "good" code? So much of the bad code I write isn't bad because it's ugly; it's bad because it misses the mark, because I made too many assumptions and didn't take the time to actually learn the domain. If I can eke out even a few more hours a week to build worthwhile solutions because I was able to focus a bit more, that's a win to me. My users in particular have a really difficult time imagining features without actually seeing them. They have a hard time articulating what's wrong/right without something tangible in front of them. It would be hard to argue that the ability to quickly prototype and demo features is a bad thing.
kranner|5 days ago
Misleading headline, with the qualifier buried six paragraphs deep. You have a wide enough readership (and well deserved too). Clickbait tactics feel a little out of place on your blog.
random3|6 days ago
The reason you pay attention to details is because complexity compounds and the cheapest cleanup is when you write something, not when it breaks.
This last part is still not fully fleshed out.
For now. Is there any reason to not expect things to improve further?
Regardless, a lot of code is cheap now and building products is fun regardless, but I doubt this will translate into more than very short-term benefits. When you lower the bar you get 10x more stuff, 10x more noise, etc. You lower it more you get 100x and so on.
AyanamiKaine|5 days ago
I truly believe that LLMs are replacing tactical programming: focusing on implementing features as fast as possible without much regard for the overall complexity of a system.
It's more important than ever to focus on keeping complexity low at a system level.
ap99|5 days ago
If you just look at generation then sure it's super cheap now.
If you look at maintenance, it's still expensive.
You can of course use AI to maintain code, but the more of it there is, the more unwieldy it gets to maintain, even with the best models and harnesses.
kakacik|5 days ago
There are of course various use cases, and for a few of them this is an acceptable tradeoff, but most software isn't written once and never touched (significantly) again; quite the contrary.
neilwilson|5 days ago
What you maintain is the specification harness, and change that to change the code.
We have to start thinking at a higher level, and see code generation in the same way we currently see compilation.
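One (entirely hypothetical) way to read this: keep the spec as executable assertions you maintain by hand, and treat the implementation as disposable regenerated output:

```python
# The implementation is disposable: an agent regenerates this body
# until the hand-maintained spec below passes. slugify() is a
# made-up stand-in for any generated function.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

def check_spec() -> None:
    # The specification harness: this is what you edit to change the code.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   Collapse ") == "spaces-collapse"

check_spec()
```

In this view the spec plays the role the source file plays today, and the generated code is an artifact you inspect about as often as you inspect compiler output.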
pmontra|5 days ago
So code is both cheaper (what the LLM wrote for me, much faster than I could have typed it) and also expensive (the only line that we deployed to production today).
renegat0x0|5 days ago
With python I can write a simple debugging UI server with a few lines.
There are frameworks that allow me to complete certain tasks in hours.
You do not need to program everything from scratch.
The more code, the faster everything gets, since the job is mostly done.
We are accelerating, but we still work 9 to 5 jobs.
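The "few lines" claim does hold up with nothing but the standard library; here's a sketch of such a throwaway debugging server (the JSON payload is made up for illustration):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class DebugHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Echo the requested path back as a tiny JSON status payload.
        body = json.dumps({"path": self.path, "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def run(port: int = 8000) -> None:
    # Blocks until interrupted; visit http://localhost:8000/ to poke it.
    HTTPServer(("localhost", port), DebugHandler).serve_forever()

# run()
```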
skydhash|5 days ago
I think you got your history wrong. People didn’t program bit by bit. They programmed on paper (flowcharts, pseudo-code, diagrams, …), then encoded that afterwards. There were a lot of programming languages before C, like Lisp and APL (which are high-level, btw). Why would they waste precious computer time when they could plan out procedures on a notepad or a whiteboard?
zahlman|5 days ago
Boilerplate is boilerplate, whether filling it in is purely mechanical or benefits from an LLM's fuzzy logic.
deterministic|4 days ago
Nope not at all long term. That's the kind of code that leads to maintenance hell, very angry customers, and burned out developers.
thih9|5 days ago
Not as cheap as generating code of equivalent quality with an LLM.
strogonoff|5 days ago
The former: 1) understand the problem, 2) solve the problem.
The latter: 1) understand the problem, 2) solve the problem, 3) understand how somebody or something else understood & solved the problem, 4) diff those two, 5) plan a transition from that solution to this solution, 6) implement that transition (ideally without unplanned downtime and/or catastrophic loss of data).
This is also why I’m not a fan of code reviews. Code review is basically steps 1–4 from the second approach, plus having to verbally explain the diff, every time.