swframe2 | 6 months ago

Preventing garbage just requires that you take into account the cognitive limits of the agent. For example ...

1) Don't ask for a large / complex change. Ask for a plan, then ask it to implement the plan in small steps and to test each step before starting the next.

2) For really complex steps, ask the model to write code to visualize the problem and solution.

3) If the model fails on a given step, ask it to add logging to the code, save the logs, run the tests and then review the logs to determine what went wrong. Do this repeatedly until the step works well (a sketch of this kind of instrumentation appears below).

4) Ask the model to look at your existing code and determine how it was designed before implementing a task. Sometimes the model will put all of the changes in one file even though your code has a cleaner design the model isn't taking into account.

I've seen other people blog about their tips and tricks. I do still see garbage results, but nowhere near 95% of the time.
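
To illustrate item 3 above, here is a minimal sketch of the kind of instrumentation you might ask the model to add, written for Node/TypeScript; the function and file names are invented for illustration:

    // Hypothetical example of step 3: structured logging the agent can re-read after a failing run.
    // parseOrders and debug.log are illustrative names, not from any real project.
    import { appendFileSync } from "node:fs";

    function log(step: string, detail: unknown): void {
      // Append one JSON line per event so the model can be pointed at debug.log afterwards.
      appendFileSync("debug.log", JSON.stringify({ ts: Date.now(), step, detail }) + "\n");
    }

    export function parseOrders(raw: string): number[] {
      log("input", { length: raw.length });
      const parsed = raw.split(",").map(Number);
      log("parsed", parsed);
      return parsed;
    }

The logging itself is trivial; the point is giving the model a durable artifact (the saved log) to reason over on its next pass.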

rco8786|6 months ago

I feel like I do all of this stuff and still end up with unusable code in most cases, and in the cases where I don't, I still usually have to hand-massage it into something usable. Sometimes it gets it right and it's really cool when it does, but anecdotally for me it doesn't seem to be making me any more efficient.

enobrev|6 months ago

> it doesn't seem to be making me any more efficient

That's been my experience.

I've been working on a 100% vibe-coded app for a few weeks. API, React-Native frontend, marketing website, CMS, CI/CD - all of it without changing a single line of code myself. Overall, the resulting codebase has been better than I expected before I started. But I would have accomplished everything it has (except for the detailed specs, detailed commit log, and thousands of tests), in about 1/3 of the time.

jaggederest|6 months ago

The key is prompting. Prompt to within an inch of your life. Treat prompts as source code - edit them in files, use @ notation to bring them into the console. Use Claude to generate its own prompts - https://github.com/wshobson/commands/ and https://github.com/wshobson/agents/ are very handy, they include a prompt-engineer persona.

I'm at the point now where I have to yell at the AI once in a while, but I touch essentially zero code manually, and it's acceptable quality. At one point I stopped and tried to fully refactor a commit that CC had created, but I was only able to make marginal improvements in return for an enormous time commitment. If I had spent that time improving my prompts and running refactoring/cleanup passes in CC, I suspect I would have come out ahead. So I'm deliberately trying not to do that.

I expect at some point on a Friday (last Friday was close) I will get frustrated and go build things manually. But for now it's a cognitive and effort reduction for similar quality. It helps to use the most standard libraries and languages possible, and great tests are a must.

Edit: Also, use the "thinking" commands. think / think hard / think harder / ultrathink are your best friend when attempting complicated changes (of course, if you're attempting complicated changes, don't.)

nostrademons|6 months ago

I've found that an effective tactic for larger, more complex tasks is to tell it "Don't write any code now. I'm going to describe each of the steps of the problem in more detail. The rough outline is going to be 1) Read this input 2) Generate these candidates 3) apply heuristics to score candidates 4) prioritize and rank candidates 5) come up with this data structure reflecting the output 6) write the output back to the DB in this schema". Claude will then go and write a TODO list in the code (and possibly claude.md if you've run /init), and prompt you for the details of each stage. I've even done this for an hour, told Claude "I have to stop now. Generate code for the finished stages and write out comments so you can pick up where you left off next time" and then been able to pick up next time with minimal fuss.
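
To make the shape of that outline concrete, here is a rough TypeScript skeleton of the six stages; every name, type, and heuristic is hypothetical - it's just the sort of scaffold the TODO list ends up describing:

    // Illustrative skeleton only: names, types, and the placeholder heuristic are invented.
    type Candidate = { id: string; score: number };

    async function run(): Promise<void> {
      const input = await readInput();                  // 1) read this input
      const candidates = generateCandidates(input);     // 2) generate these candidates
      const scored = candidates.map(scoreCandidate);    // 3) apply heuristics to score candidates
      const ranked = rank(scored);                      // 4) prioritize and rank candidates
      const records = toOutputRecords(ranked);          // 5) build the output data structure
      await writeToDb(records);                         // 6) write the output back to the DB
    }

    // TODO(stage 1): replace with the real input source once specified
    async function readInput(): Promise<string[]> { return []; }
    function generateCandidates(input: string[]): Candidate[] {
      return input.map((id) => ({ id, score: 0 }));
    }
    // TODO(stage 3): placeholder heuristic, to be detailed in a later prompt
    function scoreCandidate(c: Candidate): Candidate { return { ...c, score: c.id.length }; }
    function rank(cs: Candidate[]): Candidate[] { return [...cs].sort((a, b) => b.score - a.score); }
    function toOutputRecords(cs: Candidate[]): Candidate[] { return cs; }
    // TODO(stage 6): write to the DB in the agreed schema
    async function writeToDb(records: Candidate[]): Promise<void> { void records; }

    run();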

hex4def6|6 months ago

FYI: You can force "Plan mode" by pressing shift-tab. That will prevent it from eagerly implementing stuff.

yahoozoo|6 months ago

How does a token predictor “apply heuristics to score candidates”? Is it running a tool, such as a Python script it writes for scoring candidates? If not, isn’t it just pulling some statistically-likely “score” out of its weights rather than actually calculating one?

plaguuuuuu|6 months ago

I've been using a few LLMs/agents for a while and I still struggle to get useful output from them.

In order for it not to do useless stuff I need to expend more energy on prompting than writing stuff myself. I find myself getting paranoid about minutia in the prompt, turns of phrase, unintended associations in case it gives shit-tier code because my prompt looked too much like something off experts-exchange or whatever.

What I really want is something like a front-end framework but for LLM prompting - something that takes away a lot of the fucking about with generalised stuff like prompt structure, and defaults to best practices for finding something in code, designing a new feature, or writing tests.

Mars008|6 months ago

> What I really want is something like a front-end framework but for LLM prompting

It's not simple to even imagine an ideal solution. The more you think about it, the more complicated your solution becomes. A simple solution will be restricted to your use cases. A generic one is either visual or a programming language. I'd like to have a visual constructor, a graph of actions, but that's complicated. A language is more powerful.

dontlaugh|6 months ago

At that point, why not just write the code yourself?

lucasyvas|6 months ago

I reached this conclusion pretty quickly. With all the hand holding I can write it faster - and it’s not bragging, almost anyone experienced here could do the same.

Writing the code is the fast and easy part once you know what you want to do. I use AI as a rubber duck to shorten that cycle, then write it myself.

kyleee|6 months ago

Partly it seems to be less taxing for the human delivering the same amount of work. I find I can chat with Claude, etc and work more. Which is a double edged sword obviously when it comes to work/life balance etc. But also I am less mentally exhausted from day job and able to enjoy programming and side projects again.

harrall|6 months ago

I don’t do much of the deep prompting stuff but I find AI can write some code faster than I can and accurately most of the time. You just need to learn what those things are.

But I can’t tell you any useful tips or tricks to be honest. It’s like trying to teach a new driver the intuition of knowing when to brake or go when a traffic light turns yellow. There’s like nothing you can really say that will be that helpful.

utyop22|6 months ago

I'm finding what's happening right now kinda bizarre.

The funny thing is - we need less. Less of everything. But an up-tick in quality.

This seems to happen with humans with everything - the gates get opened, enabling a flood of producers to come in. But this causes a mountain of slop to form, and over time the tastes of folks get eroded away.

Engineers don't need to write more lines of code / faster - they need to get better at interfacing with other folks in the business organisation and get better at project selection and making better choices over how to allocate their time. Writing lines of code is a tiny part of what it takes to get great products to market and to grow/sustain market share etc.

But hey, good luck with that - one's thinking power is diminished over time by interfacing with LLMs etc.

MangoCoffee|6 months ago

I've been vibe coding a couple of personal projects. I've found that test-driven development fits very well with vibe coding, and it's just as you said: break the problem up into small, testable chunks, get the AI to write unit tests first, and then implement the actual code.
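
As a concrete (and entirely hypothetical) example of the tests-first half, using Node's built-in test runner - the slugify module deliberately doesn't exist yet when this is written:

    // A test written before any implementation exists; slugify is a made-up example module.
    import test from "node:test";
    import assert from "node:assert/strict";
    import { slugify } from "./slugify.js"; // to be implemented by the AI afterwards

    test("slugify lowercases and hyphenates", () => {
      assert.equal(slugify("Hello World"), "hello-world");
    });

    test("slugify strips non-alphanumeric characters", () => {
      assert.equal(slugify("Hello, World!"), "hello-world");
    });

Hand the AI the failing tests, then ask for the smallest implementation that makes them pass.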

yodsanklai|6 months ago

Actually, all good engineering principles which reduce cognitive load for humans work for AI as well.

alexsmirnov|6 months ago

TDD is exactly what I'm unable to get from AI tools, probably because training sets always contain both code and tests. I tried multiple models from all major providers, and all failed to create tests without seeing the code. One workflow that helps is to create a dirty implementation and generate tests for it, then throw away the first code and use a different model for the final implementation.

The best way is to create the tests yourself and block any attempts to modify them.

MarkMarine|6 months ago

Works great until it’s stuck and it starts just refactoring the tests to say true == true and calling it a day. I want the inverse of black box testing, like the inside of the box has the model in it with the code and it’s not allowed to reach outside the box and change the grades. Then I can just do the Ralph Wiggum as a software engineer loop to get over the reward hacking tendencies
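
One blunt way to keep the model from reaching outside the box is a guard that lives outside the agent's loop - for example a pre-commit or CI check that refuses changes to test files. This is only a sketch and assumes a tests/ directory and a *.test.ts naming convention:

    // Hypothetical guard script: reject staged changes that touch test files.
    import { execSync } from "node:child_process";

    const changed = execSync("git diff --cached --name-only", { encoding: "utf8" })
      .split("\n")
      .filter(Boolean);

    const touchedTests = changed.filter(
      (f) => f.startsWith("tests/") || f.endsWith(".test.ts"),
    );

    if (touchedTests.length > 0) {
      console.error("Test files were modified; review by hand:\n" + touchedTests.join("\n"));
      process.exit(1);
    }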

jason_zig|6 months ago

I've seen people post this same advice and I agree with you that it works but you would think they would absorb this common strategy and integrate it as part of the underlying product at this point...

noosphr|6 months ago

The people who build the models don't understand how to use the models. It's like asking people who design CPUs to build data-centers.

I've interviewed with three tier one AI labs and _no-one_ I talked to had any idea where the business value of their models came in.

Meanwhile Chinese labs are releasing open source models that do what you need. At this point I've built local agentic tools that are better than anything Claude and OAI have as paid offerings, including the $2,000 tier.

Of course they cost between a few dollars and a few hundred dollars per query, so until hardware gets better they will stay happily behind corporate moats and be used by the people blessed to burn money like paper.

nostrademons|6 months ago

A lot of it is integrated into the product at this point. If you have a particularly tricky bug, you can just tell Claude "I have this bug. I expected output 'foo' and got output 'bar'. What went wrong?" It will inspect the code and sometimes suggest a fix. If you run it and it still doesn't work, you can say "Nope, still not working", and Claude will add debug output to the whole program, tell you to run it again, and paste the debug output back into the console. Then it will use your example to write tests, and run against them.

tombot|6 months ago

Claude Code at least now lets you use its best model for planning mode and its cheapest model for coding mode.

MikeTheGreat|6 months ago

Genuine question: What do you mean by " ask it to implement the plan in small steps"?

One option is to write "Please implement this change in small steps?" more-or-less exactly

Another option is to figure out the steps and then ask it "Please figure this out in small steps. The first step is to add code to the parser so that it handles the first new XML element I'm interested in, please do this by making the change X, we'll get to Y and Z later"

I'm sure there's other options, too.

Benjammer|6 months ago

My method is that I work together with the LLM to figure out the step-by-step plan.

I give an outline of what I want to do, along with some breadcrumbs for any relevant existing files that are related in some way, and ask it to figure out the context for my change, write up a summary of the full scope of the change we're making (including an index of file paths to all relevant files, with a very concise blurb about what each file does/contains), and then produce a step-by-step plan at the end. I generally always have to tell it NOT to think about this like a traditional engineering team plan - this is a senior engineer and an LLM code agent working together, so think only about technical architecture - otherwise you get "phase 1 (1-2 weeks), phase 2 (2-4 weeks), step a (4-8 hours)" sort of nonsense timelines in your plan.

Then I review the steps myself to make sure they are coherent and make sense, and I poke and prod the LLM to fix anything that seems weird, correcting context or directions or whatever. Next I feed the entire document to another clean context window (or two or three) and ask it to "evaluate this plan for cohesiveness and coherency, tell me if it's ready for engineering or if there's anything underspecified or unclear", and I iterate on that 1-3 times, until a fresh context window says "This plan looks great, it's well crafted, organized, etc...." and gives no feedback.

Finally I go to a fresh context window and tell it "Review the document @MY_PLAN.md thoroughly and begin implementation of step 1, stop after step 1 before doing step 2", and I start working through the steps with it.

conception|6 months ago

I tell it to generate a todo.md file with hyper atomic todos each requiring 20 loc or less. Then have it go through that. If the change is too big, generate phases (5-25) and then do the todos for each phase. That plus some sort of reference docs/high level plan keeps it going along all right.

ants_everywhere|6 months ago

What I do is make each step roughly a reviewable commit.

So I'll say something like "evaluate the URL fetcher library for best practices, security, performance, and test coverage. Write this up in a markdown file. Add a design for single flighting and retry policy. Break this down into steps so simple even the dumbest LLM won't get confused."

Then I clear the context window and spawn workers to do the implementation.

com2kid|6 months ago

> 1) Don't ask for a large / complex change. Ask for a plan, then ask it to implement the plan in small steps and to test each step before starting the next.

I asked Claude Code to read a variable from a .env file.

It proceeded to write a .env parser from scratch.

I then asked it to just use Node's built in .env file parsing....

This was the 2nd time in the same session that it wrote a .env file parser from scratch. :/

Claude Code is amazing, but it'll go off and do stupid things even for simple requests.
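
For reference, the built-in route is a couple of lines on recent Node versions - this assumes Node 20.12+ (which ships process.loadEnvFile), and API_KEY is just a placeholder variable name:

    // No hand-rolled parser needed: load .env into process.env with the built-in helper.
    process.loadEnvFile(".env");

    const apiKey = process.env.API_KEY; // API_KEY is a hypothetical variable name
    console.log(apiKey ? "API_KEY loaded" : "API_KEY missing");

Running the program with node --env-file=.env achieves the same thing without touching the code at all.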

NitpickLawyer|6 months ago

Check your settings, they might be unable to read .env files as a guardrail.

theshrike79|6 months ago

It doesn't say no.

For me it built a full-ass YAML parser when it couldn't use Viper to parse the configuration correctly :)

It was a fully vibe-coded project (I like playing stupid and seeing what the LLM does), but it got caught when the config got a bit more complex and its shitty regex-yaml-parser didn't work anymore. :)

ants_everywhere|6 months ago

IMO by far the best improvement would be to make it easier to force the agent to use a success criterion.

Right now it's not easy prompting claude code (for example) to keep fixing until a test suite passes. It always does some fixed amount of work until it feels it's most of the way there and stops. So I have to babysit to keep telling it that yes I really mean for it to make the tests pass.
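
The loop this amounts to is roughly the following; it's only a sketch, assuming the claude CLI's non-interactive print mode (claude -p), a plain npm test target, and an arbitrary retry cap:

    // Hypothetical outer loop: keep sending the agent back in until the suite passes.
    import { execSync } from "node:child_process";

    for (let attempt = 1; attempt <= 5; attempt++) {
      try {
        execSync("npm test", { stdio: "inherit" });
        console.log(`Tests passed on attempt ${attempt}`);
        break;
      } catch {
        // The suite failed: restate the success criterion and let the agent try again.
        execSync(
          'claude -p "npm test is failing; keep fixing until the whole suite passes"',
          { stdio: "inherit" },
        );
      }
    }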

adastra22|6 months ago

This is why the jobs market for new grads and early career folks has dried up. A seasoned developer knows that this is how you manage work in general, and just treats the AI like they would a junior developer—and gets good results.

CuriouslyC|6 months ago

Why bother handing stuff to a junior when an agent will do it faster while asking fewer questions? Even if the first draft code isn’t amazing, you can quality gate with an LLM reviewer that has been instructed to be brutal, then do a manual pass when the code gets by the LLM reviewer.

paulcole|6 months ago

> Ask for a plan, then ask it to implement the plan in small steps and to test each step before starting the next.

Tried this on a developer I worked with once and he just scoffed at me and pushed to prod on a Friday.

NitpickLawyer|6 months ago

> scoffed at me and pushed to prod on a Friday.

that's the --yolo flag in cc :D

rvnx|6 months ago

Your tips are perfect.

Most users will just give a vague task like "write a clone of Steam" or "create a rocket" and then blame Claude Code.

If you want AI to code for you, you have to decompose your problem like a product owner would. You can get help from AI with that as well, but you should have a plan and specifications.

Once your plan is ready, you have to decompose the problem into different modules, then make sure each module is tested.

The issue is often with the user, not the tool, as they have to learn how to use the tool first.

wordofx|6 months ago

> Most users will just give a vague task like "write a clone of Steam" or "create a rocket" and then blame Claude Code.

This seems like half of HN with how much HN hates AI. Those who hate it or say it’s not useful to them seem to be fighting against it and not wanting to learn how to use it. I still haven’t seen good examples of it not working even with obscure languages or proprietary stuff.

ccorcos|6 months ago

Seems like this logic could all be represented in Claude.md and some agents. Has anyone done this? I’d love to just import that into my project because I’m using some of these tactics but it’s fairly manual and tedious.

biggc|6 months ago

This sounds a lot like making a change yourself.

therein|6 months ago

It appeals to some people because they'd rather manage a bot and get it to do something they told it to do rather than do it themselves.

rmonvfer|6 months ago

I’d like to add: keep some kind of development documentation where you describe in detail the patterns and architecture of your application and its components.

I’ve seen incredible improvements just by doing this and using precise prompting to get Claude to implement full services by itself, tests included. Of course it requires manual correction later but just telling Claude to check the development documentation before starting work on a feature prevents most hallucinations (that and telling it to use the Context7 MCP for external documentation), at least in my experience.

The downside to this is that 30% of your context window will be filled with documentation but hey, at least it won’t hallucinate API methods or completely forget that it shouldn’t reimplement something.

Just my 2 cents.

salty_frog|6 months ago

This is my algorithm for wetware LLMs.

whateveracct|6 months ago

that sounds like just coding it yourself with extra steps

baq|6 months ago

Exactly, then you launch ten copies of yourself and write code to manage that yourself, maybe.

renegat0x0|6 months ago

Huh, I thought that AI was made to be magic. Click and it generates code. Turns out it is like magic, but you are an apprentice, and still have to learn how to wield it.

dotancohen|6 months ago

All sufficiently advanced technology...