gchamonlive | 7 days ago
When I started coding professionally, I joined a team of only interns in a startup, hacking together a SaaS platform that had relative financial success. While we were very cheap, being paid below minimum wage, we had outages, data corruption, db wipes, server terminations, unresolved conflicts making their way to production and killing features, tons of tech debt and even more makeshift code we weren't aware of...
So yeah, while writing code was cheap, the result had a latent cost that would only show itself on occasion.
So code was always expensive, the challenge was to be aware of how expensive sooner rather than later.
The thing with coding agents is that it now seems you can eat your cake and have it too. We are all still adapting, but results indicate that, given the right prompts and processes for harnessing LLMs, quality code can be had on the cheap.
ryanackley | 7 days ago
It's cheaper but not cheap
If you're building a variation of a CRUD web app, or aggregating data from some data source(s) into a chart or table, you're right. It's like magic. I never thought this type of work was particularly hard or expensive though.
I'm using frontier models and I've found if you're working on something that hasn't been done by 100,000 developers before you and published to stackoverflow and/or open source, the LLM becomes a helpful tool but requires a ton of guidance. Even the tests LLMs will write seem biased to pass rather than stress its code and find bugs.
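For example (a hypothetical `slugify` function, not from any real project): a pass-biased test just re-asserts the one happy path the code was written against, while a stress-oriented test probes the edges where bugs actually hide.

```python
import re

def slugify(text: str) -> str:
    """Lowercase, collapse runs of non-alphanumerics to '-', trim leading/trailing '-'."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Pass-biased: mirrors the single example the code was built from.
def test_happy_path():
    assert slugify("Hello World") == "hello-world"

# Stress-oriented: edge cases that actually exercise the behavior.
def test_edges():
    assert slugify("") == ""              # empty input
    assert slugify("---") == ""           # separators only, nothing left after trim
    assert slugify("  A  B  ") == "a-b"   # runs of whitespace collapse to one '-'
    assert slugify("C++ & Go!") == "c-go" # punctuation runs collapse, trailing '-' trimmed

test_happy_path()
test_edges()
```

In my experience the LLM will happily write the first kind all day; you have to explicitly push it toward the second.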
gchamonlive | 6 days ago
It's quite cheap if you consider developer time. But it's only as cheap as you can effectively drive the model, otherwise you are just wasting tokens on garbage code.
> LLM becomes a helpful tool but requires a ton of guidance
I think this is always going to be the case. You are driving the agent like you drive a bike, it'll get you there but you need to be mindful of the clueless kid crossing your path.
For some projects I had good results just letting the agent loose. For others I'd have to make the tasks more specific and granular before offloading to the LLM. I see nothing wrong with it.
zahlman | 6 days ago
Maybe not intrinsically hard, but hard because it's so boring you can't concentrate.
> the LLM becomes a helpful tool but requires a ton of guidance. Even the tests LLMs will write seem biased to pass rather than stress its code and find bugs.
ISTR some have had success by taking responsibility for the tests and only having the LLM work on the main code. But since I only seem to recall it, that was probably a while ago, so who knows if it's still valid.
lp4v4n | 7 days ago
Now with LLMs, code is cheap and it also has quality, therefore "quality code can be had on the cheap".
Do you really believe this is the case? Why don't companies fire all their developers if they can have an algorithm that can output cheap and quality code?
gchamonlive | 6 days ago
But the thing is, there are many unknowns. We humans are very capable of adapting as we go. LLMs are fixed to the data they were trained on, and prompt engineering can only get you so far.
I think anyone asking this with the intention of actually replacing humans with LLMs doesn't really understand either humans or LLMs. They are just talking money.
nthj | 7 days ago
Many enterprises are currently exploring whether they can invite developers to leverage AI tools, like they once leveraged the compiler, to be more productive: to operate on a higher plane of agency, collaborating on what we should be building and not just on technical execution. Those actively hostile to relearning skills, or just checked out, are being laid off. (Some unprofitable business units are being swept up opportunistically too.) The idea that all developers would be fired if AI tools could write good code doesn't square with the lessons of history.
AyanamiKaine | 7 days ago
I know that things like “clean code” exist, but I always felt that actual code quality only shows when you try adding to or changing existing code, not by looking at it.
And the ability to judge code quality on a system scale is something I don’t think LLMs can do. But they may support developers in their judgment.
simonw | 7 days ago
Because it takes an experienced developer to get the machine to output cheap and quality code well enough to be useful.
That developer is just a whole lot more valuable now, because they can do more work at a higher quality.