top | item 45062312

gyosko | 6 months ago

I really don't get it.

While letting the AI write some code can be cool and fascinating, I really can't understand how:

- write the prompt (and you need to be precise, and think through and express carefully what you have in mind)

- check/try the code

- repeat

is better than writing the code by myself. AI coding like this feels like a nightmare to me and it's 100x more exhausting.

discuss

nbaugh1|6 months ago

For me, on small personal projects, I can get a project in about 4 hours to a point that would've taken about 40 before the new AI tools. At work, there is a huge difference due to the complexity of the code base and services. Using agents to code for me in those cases has 100% been a loop of iterating on something so often that I would've been better off with a more hands-on approach, essentially just reviewing PRs written by AI.

david-gpu|6 months ago

I bet some people felt the same way when we collectively moved from assembly to compilers.

gyosko|6 months ago

Yeah, except for the small fact that this time around you're trading a well-defined set of rules for pure chaos and randomness.

jplusequalt|6 months ago

In the previous scenario, programmers were still writing the code themselves. The compilers, if they were any good, generated deterministic code.

In our current scenario, programmers are merely describing what they think the code should do, and another program takes their description and then stochastically generates code based on it.

ModernMech|6 months ago

To some degree you're correct -- LLMs can be viewed as the kind of "sufficiently advanced" compiler we've always dreamed of. Our dreams didn't include the compiler lying to us though, so we have not achieved utopia.

LLMs are more like DNA transcription, where some percentage of the time it just injects a random mutation into the transcript, causing either an evolutionary advantage or a terminal disease.

This whole AI industry right now is trying to figure out how to always get the good mutation, and I don't think it's something that can be controlled that way. It will probably turn out that on a long enough timescale, left unattended, LLMs are guaranteed to give your codebase cancer.

ModernMech|6 months ago

It's not. And people are realizing that, which is causing them to bring back and reinvent aspects of software engineering for AI coding to make it more tolerable. People once questioned whether AI would replace all programming languages with natural language interfaces; it now looks like programming languages will be reinvented in the context of AI to make their natural language interfaces more tool-like.

Rinum|6 months ago

It's a change in mindset. AI is like having your own junior developer. If you've never had direct reports you have to give detailed instructions to and whose code you have to validate, then you're right, it might end up more exhausting than just doing the work yourself.

pavel_lishin|6 months ago

It definitely feels like a move towards management, which is something I've avoided for my entire career.

It's a perfectly cromulent approach and skillset - but it's a wildly different one.

broast|6 months ago

So basically what an engineering manager or product manager enjoys doing