item 46859877

adam_arthur | 27 days ago

LLMs have clearly accelerated development for the most skilled developers.

Particularly when the human acts as the router/architect.

However, I've found Claude Code and Co only really work well for bootstrapping projects.

If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.

It will probably change once the approach to large scale design gets more formalized and structured.

We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.
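A minimal sketch of the kind of stateless sub-module described above (all names here are hypothetical, for illustration only): every input is an explicit parameter and nothing reads or writes shared state, so the module can be understood, implemented, and tested in isolation with no surrounding context.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LineItem:
    """Immutable value object: no hidden state to track."""
    unit_price: float
    quantity: int

def order_total(items: list[LineItem], tax_rate: float) -> float:
    """Pure function: same inputs always yield the same output,
    so an LLM (or a human) can reason about it from the signature alone."""
    subtotal = sum(item.unit_price * item.quantity for item in items)
    return round(subtotal * (1 + tax_rate), 2)

print(order_total([LineItem(10.0, 2), LineItem(5.0, 1)], 0.1))  # 27.5
```

Because the function is pure, an agent asked to modify it needs only this file in its context window, not the rest of the codebase.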

Yes, AI will one shot crappy static sites. And you can vibe code up to some level of complexity before it falls apart or slows dramatically.


lowbloodsugar|27 days ago

>If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.

Worse, as it's planning the next change, it's reading all the bad code it wrote before, but now that bad code is blessed input. It writes more of it, and instructions to use a better approach are outweighed by the "evidence".

Also, it's not tech debt: https://news.ycombinator.com/item?id=27990979#28010192

adam_arthur|27 days ago

People can take on debt for all sorts of things. To go on vacation, to gamble.

Debt doesn't imply it's productively borrowed or intelligently used. Or even knowingly accrued.

So given how the term "technical debt" has historically been used, it seems the most appropriate descriptor.

If you write a large amount of terrible code and end up with a money-producing product, you owe that debt back. It will hinder your business or even lead to its collapse. If it were quantified in accounting terms, it would be a liability (though the sum of the parts could still be net positive).

Most "technical debt" isn't buying the code's author anything; it accrues through negligence rather than through intelligently accepting a tradeoff.

Sohcahtoa82|27 days ago

Agreed.

What I've found is that AI can be alright at creating a Proof of Concept for an app idea, and it's great as a Super Auto-complete, but anything with a modicum of complexity, it simply can't handle.

When your codebase is hundreds of thousands of lines, asking an agent to fix a bug or implement a feature based on a description of the behavior just doesn't work. The AI doesn't work from call graphs; it basically just greps for strings it thinks might be relevant. If you know exactly where the bug lies and give it that context, it can usually find it, but at that point you're just as well off fixing the bug yourself rather than having the AI do it.

The problem is that you have non-coders creating a PoC, then screaming from the rooftops how amazing AI is and showing off what it's done, but then they go quiet as the realization sets in that they can't get the AI to flesh it out into a viable product. Alternatively, they DO create a product that people start paying to use, and then they get hacked because the code is horribly insecure and hard-codes API keys.
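The hard-coded-key failure mode mentioned above has a well-known fix: read secrets from the environment at runtime instead of embedding them in source. A hedged sketch (the environment-variable name is hypothetical):

```python
import os

# Bad (commonly seen in vibe-coded PoCs):
# API_KEY = "sk-live-abc123"   # ends up committed to the repo and leaked

def load_api_key(env_var: str = "MY_SERVICE_API_KEY") -> str:
    """Fetch a secret from the environment; fail loudly if missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} in the environment")
    return key
```

This keeps the secret out of version control and makes a missing configuration an immediate, obvious error rather than a silent security hole.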

athenot|27 days ago

> We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.

Containment of state happens to benefit human developers too, and keeps complexity from exploding.

adam_arthur|27 days ago

Yes!

I've found the same principles that apply to humans apply to LLMs as well.

It's just that the agentic loops in these tools aren't (currently) structured and specific enough in their approach to optimally bound abstractions.

At the highest level, most applications can be written in simple, plain English (expressed via function names). Both humans and LLMs will understand programs much better when represented this way.
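A small illustration of the "plain English via function names" idea (all function names here are hypothetical): the top-level function reads as a sentence, with each step delegated to a descriptively named helper.

```python
def handle_signup(raw_email: str) -> str:
    """Top level reads roughly as plain English."""
    email = normalize_email(raw_email)
    if not looks_like_email(email):
        return "invalid email"
    return f"welcome, {email}"

def normalize_email(raw: str) -> str:
    """Canonicalize user input before validation."""
    return raw.strip().lower()

def looks_like_email(email: str) -> bool:
    """Cheap structural check, not full RFC validation."""
    return "@" in email and "." in email.rsplit("@", 1)[-1]
```

A reader (or an LLM) can follow `handle_signup` without opening any helper, and each helper can be implemented or regenerated in isolation.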

AndreasMoeller|26 days ago

The most interesting thing for me is that I am not sure it does.

I have been coding for 20+ years and I have used AI agents for coding a lot, especially for the last month and a half. I can't say for sure they make me faster. They definitely do for some tasks, but overall? I can solve some tasks really quickly, but at the same time my understanding of the code is not as good as it was before. I am much less confident that it is correct.

LLMs clearly make junior and mid-level engineers faster, but it is much harder to say for seniors.

krainboltgreene|27 days ago

> LLMs have clearly accelerated development for the most skilled developers.

Have they so clearly? What's the evidence?

thegrim000|27 days ago

Most people's "truth" nowadays is what they've heard enough people say is true, not objective data or measures. What people believe is true, and say is true, IS truth, to them.

themafia|27 days ago

> accrue massive technical debt

The primary difference between a programmer and an engineer.

sjdixjjxs|27 days ago

> We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation

Wait till you find out about programming languages and libraries!

> It will probably change once the approach to large scale design gets more formalized and structured

This idea has played out many times over the course of programming history. Unfortunately, reality doesn’t mesh with our attempts to generalize.