item 46942594


codebolt | 21 days ago

> People aren't prompting LLMs to write good, maintainable code though.

Then they're not using the tools correctly. LLMs are capable of producing good clean code, but they need to be carefully instructed as to how.

I recently used Gemini to build my first Android app, and I have zero experience with Kotlin or most of the libraries (but I have done many years of enterprise Java in my career). When I started I first had a long discussion with the AI about how we should set up dependency injection, Material3 UI components, model-view architecture, Firebase, logging, etc and made a big Markdown file with a detailed architecture description. Then I let the agent mode implement the plan over several steps and with a lot of tweaking along the way. I've been quite happy with the result, the app works like a charm and the code is neatly structured and easy to jump into whenever I need to make changes. Finishing a project like this in a couple of dozen hours (especially being a complete newbie to the stack) simply would not have been possible 2-3 years ago.
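For illustration, here is a condensed sketch of what such an architecture Markdown file might look like. The module names, libraries, and conventions below are assumptions for a typical Hilt/MVVM Android setup, not the commenter's actual document:

```markdown
# Architecture Overview (example sketch)

## Stack
- Language: Kotlin, single-module Android app
- DI: Hilt (@HiltViewModel for screen ViewModels, @Singleton for repositories)
- UI: Jetpack Compose with Material3 components; one top-level NavHost
- Data: Firebase Firestore behind repository interfaces; Firebase types never leak into ViewModels
- Logging: Timber; plant a DebugTree in debug builds only

## Layering rules
1. Composables observe StateFlow exposed by ViewModels; no business logic in the UI layer.
2. ViewModels call repository interfaces; repositories own all Firebase access.
3. One ViewModel per screen; shared state goes through a repository, not between ViewModels.

## Conventions
- Package by feature (feature/home, feature/settings), not by layer.
- Repository functions return Result or Flow; exceptions don't cross layer boundaries.
```

A file like this gives the agent a stable reference to re-read at the start of each session, which is the point of writing it down before any code is generated.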


ben_w|21 days ago

> Then they're not using the tools correctly. LLMs are capable of producing good clean code, but they need to be carefully instructed as to how.

I'd argue that when the code is part of a press release or corporate blog post (is there even a difference?) by the company the LLM in question comes from, e.g. Claude's C compiler, then one cannot reasonably assert they were "not using the tools correctly": even if there is some better way to use them, if even the LLM's own team don't know how to do that, it is unreasonable to expect anyone else to know how either.

I find it interesting and useful to know that the boundary of the possible is a ~100kloc project, and that even then this scale of output comes with plenty of flaws.

Know what the AI can't do, rather than what it can. Even beyond LLMs, people don't generally get paid (there are exceptions) for manually performing tasks that have already been fully automated; people get paid for what automation can't do.

Moving target, of course. This time last year, my attempt to get an AI to write a compiler for a joke language didn't even result in the compiler's own source code compiling; now it not only compiles, it runs. But my new language is a joke language; no sane person would ever use it for a serious project.

onion2k|21 days ago

> When I started I first had a long discussion with the AI... and made a big Markdown file with a detailed architecture description.

Yep, that's how you get better output from AI. A lot of devs haven't learned that yet. They still see it as 'better autocomplete'.

tossandthrow|21 days ago

While this technique works for new projects, it takes no more than a couple of pivots for it to completely fail.

A good AI development framework needs to support a tail of deprecated choices in the codebase.

Skills are considerably better for this than design docs.

troupo|21 days ago

"It's just another Markdown file, bro".

LLMs do not learn. So every new session for them will be rebuilding the world from scratch. Bloated Markdown files quickly exhaust context windows, and agents routinely ignore large parts of them.

And then you unleash them on a code base that's more than a couple of days old, and they happily duplicate code, ignore existing code paths, ignore existing conventions, etc.

sirwitti|21 days ago

Not trying to be rude, but in a technology you're not familiar with, you might not be able to tell what good code is, and even less whether it's maintainable.

Think of finding and fixing that subtle, hard-to-reproduce bug that could kill your business after 3 years.

codebolt|21 days ago

That's a fair point; my code is likely to have some warts that an experienced Android/Kotlin dev would wince at. All I know is that the app has a structure that makes overall sense to me, with my 15+ years of experience as a professional developer working with many large codebases.

fauigerzigerk|21 days ago

I think we are going to have to find out what maintenance even looks like when LLMs are involved. "Maintainable" might no longer mean quite the same thing as it used to.

But it's not going to be as easy as "just regenerate everything". There are dependencies external to a particular codebase such as long lived data and external APIs.

I also suspect that the stability of the codebase will still matter, maybe even more so than before. But the way in which we define maintainability will certainly change.

fragmede|21 days ago

The framing is key here. Is three years a long time? Both answers are right. Just getting a business off the ground is an achievement in the first place. Lasting three years? These days, I have clothes that don't even last that long. And then again, three years isn't very long at all. Bridges last decades. Countries are counted in centuries. Humanity is millennia old. If AI can make me a company that's solvent for three years? Well, you decide.

generallyjosh|20 days ago

That mirrors my experience so far. The AI is fantastic for prototyping, even in languages/frameworks you might be totally unfamiliar with. You can make all sorts of cool little toy projects in a few hours with just some minimal prompting.

The danger is, it doesn't quite scale up. The more complex the project, the more likely the AI is to get confused and start writing spaghetti code. It may even work for a while, but eventually the spaghetti piles up to the point that not even more spaghetti will fix it.

I'll bet that's going to get better over the next few years, with better tooling and better ways to get the AI to figure out and remember the relevant parts of the codebase, but that's just my guess.