
defatigable | 1 month ago

I use Augment with Claude Opus 4.5 every day at my job. I barely ever write code by hand anymore. I don't blindly accept the code that it writes; I iterate with it. We review code at my work. I have absolutely found a lot of benefit from my tools.

I've implemented several medium-scale projects that I anticipate would have taken 1-2 weeks manually, and took a day or so using agentic tools.

A few very concrete advantages I've found:

* I can spin up several agents in parallel and cycle between them, reviewing the output of one while the others crank away.

* It's greatly improved my ability in languages I'm not expert in. For example, I wrote a Chrome extension which I've maintained for a decade or so. I'm quite weak in JavaScript. I pointed Antigravity at it and gave it a very open-ended prompt (basically, "improve this extension"), and in about five minutes it vastly improved the quality of the extension (better UI, performance, removed dependencies). The improvements may have been easy for someone expert in JS, but I'm not.

Here's the approach I follow that works pretty well:

1. Tell the agent your spec, as clearly as possible. Tell the agent to analyze the code and make a plan based on your spec. Tell the agent to not make any changes without consulting you.

2. Iterate on the plan with the agent until you think it's a good idea.

3. Have the agent implement your plan step by step. Tell the agent to pause and get your input between each step.

4. Between each step, look at what the agent did and tell it about any corrections or modifications to the plan that you notice. (I find that it helps to remind them what the overall plan is, because sometimes they forget...)

5. Once the code is completed (or even between steps), I like to run a code-cleanup subagent that maintains the logic but improves the style (factors out magic constants, extracts helper functions, etc.).

This works quite well for me. Since these are text-based interfaces, I find that clarity of prose makes a big difference. Being very careful and explicit about the spec you provide to the agent is crucial.
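To make step 5 concrete, here's a hypothetical Python sketch of the kind of transformation a code-cleanup pass makes (the function, constants, and values are invented for illustration, not from any real project): the behavior stays identical, but magic numbers get names and repeated arithmetic becomes a helper.

```python
# Hypothetical "before": return amount + amount * 0.0825 + 2.50
# A cleanup pass keeps the logic identical but names the constants
# and factors the arithmetic into a helper.

SALES_TAX_RATE = 0.0825  # was an inline magic constant
HANDLING_FEE = 2.50      # was an inline magic constant

def sales_tax(amount: float) -> float:
    """Helper factored out of the inline expression."""
    return amount * SALES_TAX_RATE

def price_with_tax(amount: float) -> float:
    # Same result as the original one-liner, but each term is named.
    return amount + sales_tax(amount) + HANDLING_FEE
```

Because the cleanup agent is told to preserve logic, a diff like this is easy to review even when it touches many files.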


marcus_holmes|1 month ago

This. I use it for coding in a Rails app when I'm not a Ruby expert. I can read the code, but writing it is painful, and so having the LLM write the code is beneficial. It's definitely faster than if I was writing the code, and probably produces better code than I would write.

I've been a professional software developer for >30 years, and this is the biggest revolution I've seen in the industry. It is going to change everything we do. There will be winners and losers, and we will make a lot of mistakes, as usual, but I'm optimistic about the outcome.

defatigable|1 month ago

Agreed. In the domains where I'm an expert, it's a nice productivity boost. In the domains where I'm not, it's transformative.

As a complete aside from the question of productivity, these coding tools have reawakened a love of programming in me. I've been coding for long enough that the nitty-gritty of everyday programming just feels like a slog: deciphering compiler errors, fixing type-checking issues, factoring out helper functions, whatever. With these tools, I get to think about code at a much higher level. I create designs and high-level ideas, and the AI does all the annoying detail work.

I'm sure there are other people for whom those tasks feel like an interesting and satisfying puzzle, but for me it's been very liberating to escape from them.

jesse__|1 month ago

> I've implemented several medium-scale projects that I anticipate would have taken 1-2 weeks manually

A 1-week project is a medium-scale project?! That's tiny, dude. A medium project for me is like 3 months of 12h days.

defatigable|1 month ago

You are welcome to use whatever definition of "small/medium/large" you like. Like you, I've worked on projects far larger than 1-2 weeks. I don't think that's particularly relevant to the point of my post.

The point that I'm trying to emphasize is that I've had success with it on projects of some scale, where you are implementing (e.g.) multiple related PRs in different services. I'm not just using it on very tightly scoped tasks like "implement this function".

drewstiff|1 month ago

Well a medium project for me takes 3 years, so obviously I am the best out of everyone /s

monkeydust|1 month ago

Steps 1 and 2, i.e. creating a spec which is the source of truth (spec-driven development), are key to getting anything production-grade, in our experience.

defatigable|1 month ago

Yes. This was the key thing I learned that let me set the agents loose on larger tasks. Before I started iterating on specs with them, I mostly had them doing very small scale, refactor-this-function style tasks.

The other advice I've read that I haven't yet internalized as much is to use an "adversarial" approach with the LLMs: i.e. give them a rigid framework that they have to code against. So, e.g., generate tests that the code has to work against, or sample output that the code has to perfectly match. My agents do write tests as part of their work, and I use them to verify correctness, but I haven't updated my flow to emphasize that the agents should start with those, and iterate on them before working on the main implementation.
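A minimal Python sketch of that adversarial, test-first framework (the `slugify` function and its cases are hypothetical, invented for illustration): the tests are agreed on up front and treated as fixed, and the agent iterates on the implementation until they pass.

```python
import re

# Tests written FIRST and treated as the rigid contract;
# the agent may not edit these, only the implementation below.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"
    assert slugify("") == ""

# Implementation the agent iterates on until the contract passes.
def slugify(text: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace to hyphens."""
    text = re.sub(r"[^a-z0-9\s-]", "", text.lower())
    return re.sub(r"[\s-]+", "-", text).strip("-")

test_slugify()
```

The point of the ordering is that the agent can't quietly redefine success: it has to make the code match the contract, not the contract match the code.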

laserlight|1 month ago

I wouldn't consider the proposed workflow agentic. When you review each step and give feedback after each step, it's simply development with LLMs.

defatigable|1 month ago

Interesting. What would make the workflow "agentic" in your mind? The AI implementing the task fully autonomously, never getting any human feedback?

To me, "agentic" in this context essentially means that the LLM has the ability to operate autonomously: execute tools on my behalf, etc. So, for example, my coding agents will often run unit tests, run code generation tools, etc. I've even used my agents to fix issues with git pre-commit hooks, in which case they've operated in a loop, repeatedly trying to check in code and fixing errors they see in the output.

So in that sense they are theoretically capable of one-shot implementing any task I set them to; their quality is just not good enough yet to trust them with that. But maybe you mean something different?

mountainriver|1 month ago

Same, Opus 4.5 is nothing short of amazing. I’m really shocked to see so many posts claiming it doesn’t work.

We write full-scale Rust SaaS apps with few regressions.

I do novel machine learning research in about 1/10 of the time it would have taken me.

A big thing is telling it to log excessively so it can see the execution.
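One hedged sketch of what that "excessive logging" advice might look like in Python (the function and its logic are invented for illustration): log the inputs, every intermediate decision, and the output, so the agent can read the execution trace when a run misbehaves.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

def normalize_scores(scores):
    """Scale a list of scores into [0, 1], logging every step."""
    log.debug("normalize_scores input: %r", scores)
    if not scores:
        log.debug("empty input, returning []")
        return []
    lo, hi = min(scores), max(scores)
    log.debug("min=%s max=%s", lo, hi)
    if hi == lo:
        log.debug("all values equal, returning zeros")
        return [0.0] * len(scores)
    result = [(s - lo) / (hi - lo) for s in scores]
    log.debug("normalize_scores output: %r", result)
    return result
```

The logging is deliberately noisier than a human would tolerate in production; the idea is that the agent pastes the trace back into its context and debugs from it, then the extra logging can be stripped later.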

tkgally|1 month ago

Great advice.

> Tell the agent your spec, as clearly as possible.

I have recently added a step before that when beginning a project with Claude Code: invoke the AskUserQuestionTool and have it ask me questions about what I want to do and what approaches I prefer. It helps to clarify my thinking, and the specs it then produces are much better than if I had written them myself.

I should note, though, that I am a pure vibe coder. I don't understand any programming language well enough to identify problems in code by looking at it. When I want to check whether working code produced by Claude might still contain bugs, I have Gemini and Codex check it as well. They always find problems, which I then ask Claude to fix.

None of what I produce this way is mission-critical or for commercial use. My current hobby project, still in progress, is a Japanese-English dictionary:

https://github.com/tkgally/je-dict-1

https://www.tkgje.jp/

defatigable|1 month ago

Great idea! That's actually the very next improvement I was planning on making to my coding flow: building a subagent that is purely designed to study the codebase and create a structured implementation plan. Every large project I work on has the same basic initial steps (study the codebase, discuss the plan with me, etc.), so it makes sense to formalize this in an agent I specialize for the purpose.

solaris2007|1 month ago

[deleted]

djmips|1 month ago

"Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes"

molteanu|1 month ago

That's a very good point.

The OP is "quite weak at JavaScript," but their AI "vastly improved the quality of the extension." Like, my dude, how can you tell? Does the code look polished, does it look smart, do the tests pass, or what?! How can you come forward and be the judge of something you're not an expert in?

I mean, at this point, I'm beginning to be skeptical about half the content posted online. Anybody can come up with any damn story and make it credible. Just the other day I found out about Reddit engagement bots, and I've seen some in the wild myself.

I'm waiting for the internet bubble to burst already so we can all go back to our normal lives, where we left them 20 years or so ago.

defatigable|1 month ago

I've never had a job where writing Javascript has been the primary language (so far it's been C++/Java/Golang). The JS Chrome Extension is a fun side project. Using Augment in a work context, I'm primarily using it for Golang and Python code, languages where I'm pretty proficient but AI tools give me a decent efficiency boost.

I understand the emotional satisfaction of letting loose with an easy snarky comment, of course, but you missed the mark, I'm afraid.