top | item 46936279


askonomm | 22 days ago

Seems like the author has a case of all or nothing. The real power in agentic programming, to me, is not in extremes, but in the fact that you are still actively present. You don't give it world-size things to do, but byte-sized ones, and you constantly steer it. The trick is to be detailed enough to produce quality, and to be aware of everything it produces, but not so detailed that it would make more sense to just write the code yourself. It's a delicate balance, but once you've found it, it's incredibly powerful. Especially mixed with deterministic self-checking tools (like some MCPs).

If you "set and forget", then you are vibe coding, and I do not trust for a second that the output is quality, or that you'd even know how that output fits into the larger system. You effectively delegate the reason you are being paid to the AI, so why pay you? What are you adding to the mix here? Your prompting skills?

Agentic programming, to me, is just a more efficient use of the tools I already used anyway; it's not doing the thinking for me, it's just doing the _doing_ for me.



pdimitar | 22 days ago

I am with you and fully agree with your "it does not have to be all or nothing" stance. A remark on one part of your comment:

> What are you adding to the mix here? Your prompting skills?

The answer to that is an unironic and dead-serious "yes!".

My colleagues use Claude Opus and it does an okay job but occasionally misses important things. I've had one 18-hour session with it in which we fixed 3 serious but subtle and difficult-to-reproduce bugs, plus 6-7 flaky tests, and our CI has been 100% green ever since.

Being a skilled operator is an actual billable skill IMO. And that will continue to be the case for a while unless the LLM companies manage to make another big leap.

I've personally witnessed Opus do world-class detective work. I even left it unattended and it churned away on a problem for almost 5h. But I spent an entire hour before that carefully telling it its success criteria, never to delete tests, never to relax requirements X & Y & Z, always to use this exact feedback loop when testing after it iterated on a fix, and a bunch of others.
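For concreteness, a preamble along those lines might look something like the sketch below. This is purely illustrative: the headings, wording, and the placeholder requirements X, Y, Z are assumptions mirroring the constraints described above, not the actual prompt used.

```markdown
# Session preamble (hypothetical sketch)

## Success criteria
- All existing tests pass and CI is green.
- The reported bug can no longer be reproduced.

## Hard constraints
- Never delete or skip tests.
- Never relax requirements X, Y, or Z to make a test pass.

## Feedback loop (use after every iteration on a fix)
1. Make one focused change.
2. Run the full test suite.
3. If anything fails, fix or revert before iterating further.
```

The point is that the operator's hour of up-front work is itself an artifact: explicit success criteria, hard constraints, and a fixed feedback loop the agent must follow.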

In that ~5h session Opus fixed another extremely annoying bug and found mistakes in the tests, which it corrected after first fixing the production code and writing new tests.

Opus can be scary good but you must not handwave anything away.

I've found a love for being an architect ever since I started using the newest generation of [scarily smart-looking] LLMs.

askonomm | 22 days ago

Yup, totally! I'm also not against the evolution of the software engineer into a software architect. We were headed in that direction already anyway, with the ever-increasing amount of abstraction in our libraries and tools. This also frees me up to do other things, like coordinating cross-team efforts, dealing with customer support issues, etc. As a generalist, I feel more useful, and thus more valuable, than ever, and that makes me very happy.

krackers | 22 days ago

> unless the LLM companies manage to make another big leap.

Why would it be a big leap? If the behavior you want can already be elicited from models with the right prompting, it's something that can be trained toward. As a simple mental model, you could imagine training a verifier-type model of Claude that, given a problem, spits out a prompt detailing "its success criteria, never to delete tests, never to relax requirements X & Y & Z, always to use this exact feedback loop when testing after it iterated on a fix, and a bunch of others." Things like specific feedback loops or agentic harnesses will also end up being trained in, similar to how Claude is specifically trained for use with Claude Code.
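The two-stage idea sketched here, one model writing the detailed operator prompt and another executing it, can be illustrated with a toy pipeline. This is a hypothetical sketch: both model calls are stubbed with plain functions, and all names (`planner`, `executor`) and the prompt wording are illustrative, not any real API.

```python
# Hypothetical two-stage sketch: a "verifier-type" planner model writes the
# detailed prompt an operator would have written, and a coding agent executes it.
# Both stages are stubs standing in for real model calls.

def planner(task: str) -> str:
    """Stand-in for a verifier-type model that turns a bare task into a
    detailed operator-style prompt (success criteria, constraints, loop)."""
    return (
        f"Task: {task}\n"
        "Success criteria: all tests pass; the bug no longer reproduces.\n"
        "Constraints: never delete tests; never relax requirements.\n"
        "Feedback loop: change -> run tests -> fix or revert."
    )

def executor(prompt: str) -> dict:
    """Stand-in for the coding agent that receives the generated prompt."""
    return {"prompt": prompt, "status": "done"}

result = executor(planner("fix flaky test in CI"))
print(result["status"])  # prints "done"
```

If the hand-written preamble is what makes today's agents reliable, the claim is that generating that preamble is itself a learnable step, so the operator skill gets folded into the pipeline.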

Thinking of "prompt engineering" as a skill is a fool's game; these are language models, after all. Do you really think you will hold an advantage over them in your ability to phrase things?