alonsonic | 8 months ago

Yes, it's very polarized. That being said, people have shown a lot of code produced by LLMs so I don't understand the dismissive argument you make at the end.

Below is a link to a great article by Simon Willison explaining an LLM assisted workflow and the resulting coded tools.

[0] https://simonwillison.net/2025/Mar/11/using-llms-for-code/ [1] https://github.com/simonw/tools

OtherShrezzing|8 months ago

While I greatly appreciate all of Simon Willison's publishing, these tools don't meet the criteria of the OP's comment in my opinion. The tools in Willison's archive all do useful but ultimately small tasks, which mostly fit the "They're okay for one-off scripts or projects you do not intend to maintain" caveat from OP.

Meanwhile, it's not uncommon to see people on HN saying they're orchestrating multiple major feature implementations in parallel. The impression we get here is that Simon Willison's entire `tools` featureset could be implemented in a couple of hours.

I'd appreciate some links to the second set of people. Happy to watch YouTube videos or read more in-depth articles.

hedgehog|8 months ago

There's a third category I'd place myself in which is doing day to day work in shipping codebases with some history, using the tools to do a faster and better job of the work I'd do anyway. I think the net result is better code, and ideally on average less of it relative to the functionality because refactors are less expensive.

tptacek|8 months ago

Many big systems are comprised of tools that do a good job at solving small tasks, carefully joined. That LLMs are not especially good at that joinery just means that's a part of the building process that stays manual.

graemep|8 months ago

It's really not that different.

"If you assume that this technology will implement your project perfectly without you needing to exercise any of your own skill you’ll quickly be disappointed."

"They’ll absolutely make mistakes—sometimes subtle, sometimes huge. These mistakes can be deeply inhuman—if a human collaborator hallucinated a non-existent library or method you would instantly lose trust in them"

"Once I’ve completed the initial research I change modes dramatically. For production code my LLM usage is much more authoritarian: I treat it like a digital intern, hired to type code for me based on my detailed instructions."

"I got lucky with this example because it helped illustrate my final point: expect to need to take over. LLMs are no replacement for human intuition and experience. "

unshavedyak|8 months ago

I've been experimenting with them quite a bit for the past two weeks. So far the best productivity i've found comes from very tight hand-holding and clear instructions, objectives, etc. Very, very limited thinking. Ideally none.

What that gets me though is less typing fatigue, and fewer decisions made just to spare my wrists/etc. If it's a large (but simple!) refactor, the LLM generally does amazing at that. As good as i would do. But it does that with zero wrist fatigue. Things that i'd normally want to avoid or take my time on, it bangs out in minutes.

This, coupled with Claude Code's recently introduced Hooks[1], helps curb a lot of behaviors that are difficult to make perfect from an LLM. I.e. making sure it tests, formats, doesn't include emojis (boy does it like those lol), etc.
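As a rough sketch of what that looks like (based on the linked hooks docs; the matcher and the formatter command here are illustrative, not from the comment), a `PostToolUse` hook in Claude Code's settings file can run a formatter after every file edit:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "cargo fmt"
          }
        ]
      }
    ]
  }
}
```

The same pattern works for running tests or lint checks, so the model can't quietly skip those steps.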

And of course a bunch of other practices for good software in general make the LLMs better, as has been discussed on HN plenty of times. Eg testing, docs, etc.

So yea, they're dumb and i don't trust their "thinking" at all. However i think they have huge potential to help us write and maintain large codebases and generally multiply our productivity.

It's an art for sure though, and restraint is needed to prevent slop. They will put out so. much. slop. Ugh.

[1]: https://docs.anthropic.com/en/docs/claude-code/hooks