top | item 46255285

Ask HN: How can I get better at using AI for programming?

471 points| lemonlime227 | 3 months ago | reply

I've been working on a personal project recently, rewriting an old jQuery + Django project into SvelteKit. The main work is translating the UI templates into idiomatic SvelteKit while maintaining the original styling. This includes things like using semantic HTML instead of div-spamming, not wrapping divs in divs in divs, and replacing Bootstrap with minimal Tailwind. It also includes some logic refactors that preserve the original functionality while shedding years of code debt: things like replacing templates that use boolean flags for multiple views with composable Svelte components.

I've had a fairly steady process for doing this: look at each route defined in Django, build out my `+page.server.ts`, and then split each major section of the page into a Svelte component with a matching Storybook story. It takes a lot of time to do this, since I have to ensure I'm not just copying the template but rather recreating it in a more idiomatic style.
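As a hedged sketch, here's what that per-route step can look like in plain TypeScript. Every name and data shape below is invented for illustration; a real `+page.server.ts` would import `PageServerLoad` from `./$types` and query the actual backend instead of an in-memory array:

```typescript
// Sketch of the per-route translation step: a Django view's context dict
// becomes a SvelteKit load function's return value. Types are stubbed
// locally so the sketch stands alone.
type Params = { slug: string };
type LoadEvent = { params: Params };

// Hypothetical article shape, mirroring what the Django template consumed.
type Article = { slug: string; title: string; published: boolean };

const db: Article[] = [
  { slug: "hello", title: "Hello World", published: true },
  { slug: "draft", title: "WIP", published: false },
];

// Equivalent of the Django view: fetch data, filter, hand it to the page
// as props for the Svelte components to consume.
export function load({ params }: LoadEvent) {
  const article = db.find((a) => a.slug === params.slug && a.published);
  if (!article) throw new Error(`404: no article "${params.slug}"`);
  return { article };
}
```

Each major section of the page then becomes a component taking `article` (or a slice of it) as a prop, with a Storybook story exercising it in isolation.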

This kind of work seems like a great use case for AI-assisted programming, but I've failed to use it effectively. At most, I can get Claude Code to recreate some slightly less spaghetti code in Svelte. Simple prompting just can't get the AI's code quality to within 90% of what I'd write by hand. Ideally, AI could get its code to something I could review manually in 15-20 minutes, which would massively speed up this project (right now it takes me 1-2 hours to properly translate a route).

Do you guys have tips or suggestions on how to improve my efficiency and code quality with AI?

469 comments

[+] bcherny|3 months ago|reply
Hey, Boris from the Claude Code team here. A few tips:

1. If there is anything Claude tends to repeatedly get wrong, not understand, or spend lots of tokens on, put it in your CLAUDE.md. Claude automatically reads this file and it’s a great way to avoid repeating yourself. I add to my team’s CLAUDE.md multiple times a week.

2. Use Plan mode (press shift-tab 2x). Go back and forth with Claude until you like the plan before you let Claude execute. This easily 2-3x’s results for harder tasks.

3. Give the model a way to check its work. For svelte, consider using the Puppeteer MCP server and tell Claude to check its work in the browser. This is another 2-3x.

4. Use Opus 4.5. It’s a step change from Sonnet 4.5 and earlier models.

Hope that helps!
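To make (1) concrete, a minimal CLAUDE.md along those lines might look like this. The rules are invented for a project like the OP's, not taken from any real config:

```markdown
# CLAUDE.md

## Code style
- Use semantic HTML (`<nav>`, `<section>`, `<article>`) instead of nested divs.
- Style with minimal Tailwind utility classes; never add Bootstrap classes.
- Prefer small composable Svelte components over boolean view flags.

## Workflow
- Every new component gets a matching Storybook story.
- Run the project's check/lint script before declaring a task done.
```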

[+] epolanski|3 months ago|reply
> If there is anything Claude tends to repeatedly get wrong, not understand, or spend lots of tokens on, put it in your CLAUDE.md. Claude automatically reads this file and it’s a great way to avoid repeating yourself.

Sure, for 4 or 5 interactions; then it will ignore those completely :)

Try it for yourself: add an instruction to CLAUDE.md to always refer to you as Mr. bcherny, and it will stop doing so very soon. Coincidentally, at that point it also loses track of all the other instructions.

[+] keepamovin|3 months ago|reply
This is cool, thank you!

Some things I found from my own interactions across multiple models (in addition to above):

- It's basically all about the importance of (3). You need a feedback loop (we all do), and the best way is for it to change things and see the effects (ideally against a good baseline like a test suite, where it can roughly gauge how close or far it is from the goal). For assembly, a debugger/tracer works great (use batch mode or scripts, as models/tooling often choke on interactive TUI I/O).

- If it keeps missing the mark, tell it to decorate the code with a file log recording all the info it needs to understand what's happening. Its analysis of such logs normally zeroes in on the solution pretty quickly, especially for complex tasks.

- If it's really struggling, tell it to sketch out a full plan in pseudocode, explain why that will work, and analyze it for any gotchas. Then have it analyze the differences between the current implementation and the ideal it just worked out. This often helps get it unblocked.

[+] glamp|3 months ago|reply
Hey Boris,

I couldn't agree more. And using Plan mode was a major breakthrough for me. Speaking of Plan Mode...

I was previously using it repeatedly in sessions (and was getting great results). The most recent major release introduced this bug where it keeps referring back to the first plan you made in a session even when you're planning something else (https://github.com/anthropics/claude-code/issues/12505).

I find this bug incredibly confusing. Am I using Plan Mode in a really strange way? Because for me this is a showstopper bug: my core workflow is broken. I assume I'm using Claude Code abnormally, otherwise this bug would be a bigger issue.

[+] malloc2048|3 months ago|reply
Thank you for Claude Code (Web). Google has a similar offering with Google Jules. I got really, really bad results from Jules and was amazed by Claude Code when I finally discovered it.

I compared both with the same set of prompts; Claude Code seemed like a senior expert developer, and Jules, well, I don't know how it could be that bad ;-)

Anyway, I also wanted persistent information, so I don't have to feed Claude Code the same stuff over and over again. I was looking for functionality similar to Claude Projects, but that's not available for Claude Code Web.

So I asked Claude what would be a way of achieving pretty much the same thing as Projects, and it told me to put all the information I wanted to share in a file named `.clinerules`, placed in the root of my repository.

So please help me, is your recommendation the correct way of doing this, or did Claude give the correct answer?

Maybe you can clear that up by explaining the difference between the two files?

[+] moribvndvs|3 months ago|reply
Do you recommend having Claude dump your final plan into a document and having it execute from that piece by piece?

I feel like when I do plan mode (for CC and competing products), it seems good, but when I tell it to execute the output is not what we planned. I feel like I get slightly better results executing from a document in chunks (which of course necessitates building the iterative chunks into the plan).

[+] dotancohen|3 months ago|reply

> I add to my team’s CLAUDE.md multiple times a week.

How big is that file now? How big is too big?
[+] Etheryte|3 months ago|reply
Hah, that's funny. Claude can't help but mess up all the comments in the code, even if I explicitly tell it five times not to change any comments. That's literally the experience I had before opening this thread, never mind how often it completely ignores CLAUDE.md.
[+] cafebeen|3 months ago|reply
Thanks for your great work on Claude Code!

One other feature of CLAUDE.md I’ve found useful is imports: prepending @ to a file name will force it to be imported into context. Otherwise, whether a file is read and loaded into context depends on tool use and planning by the agent (even with explicit instructions like “read file.txt”). Of course, this means you have to be judicious with imports.
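For instance, a CLAUDE.md using imports might include lines like these (file names are hypothetical):

```markdown
Component conventions: @docs/style-guide.md
Available npm scripts: @package.json
```

Each `@`-prefixed path is pulled into context up front, rather than left to the agent's discretion.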

[+] dmd|3 months ago|reply
I would LOVE to use Opus 4.5, but it means I (a merely Pro peon) can work for maybe 30 minutes a day, instead of 60-90.
[+] sahilagarwal|3 months ago|reply
Hi Boris,

If you wouldn't mind answering a question for me, it's one of the main things that has made me not add claude in vscode.

I have a custom 'code style' system prompt that I want claude to use, and I have been able to add it when using claude in browser -

```
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.

Trust the context you're given. Don't defend against problems the human didn't ask you to solve.
```

How can I add it as a system prompt (or whatever it's called) in VS Code so LLMs adhere to it?

[+] mraza007|3 months ago|reply
+1 on that, Opus 4.5 is a game changer. I used it to refactor and modernize one of my old React projects that used Bootstrap. You have to be really precise when prompting, and having a solid CLAUDE.md works really well.
[+] idk1|3 months ago|reply
Hey there Boris from the Claude Code team! Thanks for these tips! Love Claude Code, absolutely one of the best pieces of software that has ever existed. What I would absolutely love is if the Claude documentation had examples of these. I see, time and time again, people saying what to do in the case you describe: update the CLAUDE.md with things it gets wrong repeatedly. But it's very rare to see examples. Just three or four examples of something it got wrong, and how you fixed it, would be immensely helpful.
[+] jMyles|3 months ago|reply
3. Puppeteer? Or Playwright? I haven't been able to make Puppeteer work for the past 8 weeks or so ("failed to reconnect"). Do you have a doc on this?
[+] scellus|3 months ago|reply
In other words, permanent instructions and context well presented in *.md, planning and review before execution, agentic loops with feedback, and a good model.

You can do this with any agentic harness, just plain prompting and "LLM management skills". I don't have Claude Code at work, but all this applies to Codex and GH Copilot agents as well.

And agreed, Opus 4.5 is next level.

[+] matt3210|3 months ago|reply
I’ve yet to see any real work get done with agents. Can you share examples or videos of real production level work getting done? Maybe in a tutorial format?

My current understanding is that it’s for demos and toy projects

[+] goalieca|3 months ago|reply
> I add to my team’s CLAUDE.md multiple times a week.

This concerns me because fighting tooling is not a positive thing. It’s very negative and indicates how immature everything is.

[+] kidbomb|3 months ago|reply
Does the same happens if I create an AGENTS.md instead?
[+] kotatsu_dog|3 months ago|reply
In addition, having Claude Code's code and plans evaluated is very worthwhile. It leads to calmer decisions from the AI agents.
[+] kelvinjps10|3 months ago|reply
How do you make Claude Code choose Opus and not Sonnet? It seems to pick automatically for me.
[+] emseetech|3 months ago|reply
Everyone's suggestions feel designed to frustrate me. Instructions on how to cajole and plead that seem more astrology than engineering.

This is the pattern I settled on about a year ago. I use it as a rubber-duck / conversation partner for bigger picture issues. I'll run my code through it as a sanity "pre-check" before a pr review. And I mapped autocomplete to ctrl-; in vim so I only bring it up when I need it.

Otherwise, I write everything myself. AI written code never felt safe. It adds velocity but velocity early on always steals speed from the future. That's been the case for languages, for frameworks, for libraries, it's no different for AI.

In other words, you get better at using AI for programming by recognizing where its strengths lie and going all in on those strengths. Don't twist up in knots trying to get it to do decently what you can already do well yourself.

[+] bogtog|3 months ago|reply
Using voice transcription is nice for fully expressing what you want, so the model doesn't need to make guesses. I'm often voicing 500-word prompts. If you talk in a winding way that looks awkward when in text, that's fine. The model will almost certainly be able to tell what you mean. Using voice-to-text is my biggest suggestion for people who want to use AI for programming

(I'm not a particularly slow typer. I can go 70-90 WPM on a typing test. However, this speed drops quickly when I need to also think about what I'm saying. Typing that fast is also kinda tiring, whereas talking/thinking at 100-120 WPM feels comfortable. In general, I think just this lowered friction makes me much more willing to fully describe what I want)

You can also ask it, "do you have any questions?" I find that saying "if you have any questions, ask me, otherwise go ahead and build this" rarely produces questions for me. However, if I say "Make a plan and ask me any questions you may have" then it usually has a few questions

I've also found a lot of success when I tell Claude Code to emulate some specific piece of code I've previously written, either within the same project or something I've pasted in.

[+] Marsymars|3 months ago|reply
> I'm not a particularly slow typer. I can go 70-90 WPM on a typing test. However, this speed drops quickly when I need to also think about what I'm saying. Typing that fast is also kinda tiring, whereas talking/thinking at 100-120 WPM feels comfortable.

This doesn't feel relatable at all to me. If my writing speed is bottlenecked by thinking about what I'm writing, and my talking speed is significantly faster, that just means I've removed the bottleneck by not thinking about what I'm saying.

[+] cjflog|3 months ago|reply
100% this, I built laboratory.love almost entirely with my voice and (now-outdated) Claude models

My go-to prompt finisher, which I have mapped to a hotkey due to frequent use, is "Before writing any code, first analyze the problem and requirements and identify any ambiguities, contradictions, or issues. Ask me to clarify any questions you have, and then we'll proceed to writing the code"

[+] Applejinx|3 months ago|reply
It's an AI. You might do better by phrasing it, 'Make a plan, and have questions'. There's nobody there, but if it's specifically directed to 'have questions' you might find they are good questions! Why are you asking, if you figure it'd be better to get questions? Just say to have questions, and it will.

It's like a reasoning model. Don't ask, prompt 'and here is where you come up with apropos questions' and you shall have them, possibly even in a useful way.

[+] dominotw|3 months ago|reply
Surprised AI companies are not making this workflow possible, instead of leaving it up to users to figure out how to get voice-to-text into the prompt.
[+] johnfn|3 months ago|reply
That's a fun idea. How do you get the transcript into Claude Code (or whatever you use)? What transcription service do you use?
[+] d4rkp4ttern|3 months ago|reply
> if you talk in a winding way …

My regular workflow is to talk (I use VoiceInk for transcription) and then say “tell me what you understood” — this puts your words into a well structured format, and you can also make sure the cli-agent got it, and expressing it explicitly likely also helps it stay on track.

[+] listic|3 months ago|reply
Thanks for the advice! Could you please share how you enabled voice transcription in your setup, and what tool it actually is?
[+] j45|3 months ago|reply
Speech also uses a different part of the brain, and maybe less finger coordination.
[+] journal|3 months ago|reply
Voice transcription feels silly when someone can hear you talking to something that isn't exactly human; imagine explaining that you were talking to an AI. Still, when it's more than one sentence, I use voice too.
[+] serial_dev|3 months ago|reply
Here’s how I would do this task with cursor, especially if there are more routes.

I would open a chat and refactor the template together with cursor: I would tell it what I want and if I don’t like something, I would help it to understand what I like and why. Do this for one route and when you are ready, ask cursor to write a rules file based on the current chat that includes the examples that you wanted to change and some rationale as to why you wanted it that way.

Then in the next route, you can basically just say refactor and that’s it. Whenever you find something that you don’t like, tell it and remind cursor to also update the rules file.

[+] Frannky|3 months ago|reply
I see LLMs as searchers with the ability to change the data a little and stay in a valid space. If you think of them as searchers, it becomes automatic to make the search easy (small context, small precise questions), and you won't keep trying again and again if the code isn't working (no data in the training set). Also, you will realize that if a language is not well represented in the training data, they may not work well.

The more specific and concise you are, the easier it will be for the searcher. Also, the less modification, the better, because the more you try to move away from the data in the training set, the higher the probability of errors.

I would do it like this:

1. Open the project in Zed

2. Add the Gemini CLI, Qwen Code, or Claude to the agent system (use Gemini or Qwen if you want to do it for free, or Claude if you want to pay for it)

3. Ask it to correct a file (if the files are huge, it might be better to split them first)

4. Test if it works

5. If not, try feeding the file and the request to Grok or Gemini 3 chat

6. If nothing works, do it manually

If instead you want to start something new, one-shot prompting can work pretty well, even for large tasks, if the data is in the training set. Ultimately, I see LLMs as a way to legally copy the code of other coders more than anything else

[+] mergesort|3 months ago|reply
I spend a lot of time teaching people AI by having them bring their own idea, which we build over a three-hour workshop. [1]

The workshop starts off with a very simple premise. I ask people to write their idea down in a Google Doc with all the details they need to hand it off to an AI, so the AI can build it autonomously.

What people discover is that communicating your idea is MUCH harder than they thought. They often write a few sentences or a paragraph, and I plainly ask them "if you gave this to a junior developer do you think they'd be able to build your idea?" They say of course not, and we try again.

We do a v2, a v3, a v4, and so on, while we talk through their ideas, develop new ideas to improve their prompt, and I teach them about how AI can make this process easier. The workshop goes on like this until we have a page or two of context. Finally we can hand the idea off to AI, and boom: a few minutes later they either have their idea or they have something we can quickly mold into their vision.

This part of the process is where I think most people struggle. People think they're good communicators, but they only realize how much work it is to communicate their ideas once they are faced with the prospect of clearly describing their problem and writing it down in front of another person.

I say this not to try to shill my workshops, but to say that the results are spectacular for a simple reason. Describing the problem well is 80% of the work, but once you do that and do it well, AI can take over and do a genuinely excellent job.

I often joke at the end of my workshops that I call these AI workshops, but each is effectively a three-hour workshop on communication. Most software developers wouldn't pay much for a communication workshop even if it made them more effective at using tools like Claude Code, Codex, or even vibe coding, so I wrap everything up in a neat AI sandwich. :)

[1] https://build.ms/ai

[+] theahura|3 months ago|reply
Soft plug: take a look at https://github.com/tilework-tech/nori-profiles

I've spent the last ~4 months figuring out how to make coding agents better, and it's really paid off. The configs at the link above make claude code significantly better, passively. It's a one-shot install, and it may just be able to one-shot your problem, because it does the hard work of 'knowing how to use the agents' for you. Would love to know if you try it out and have any feedback.

(In case anyone is curious, I wrote about these configs and how they work here: https://12gramsofcarbon.com/p/averaging-10-prs-a-day-with-cl...

and I used those configs to get to the top of HN with SpaceJam here: https://news.ycombinator.com/item?id=46193412)

[+] justatdotin|3 months ago|reply
What really got me moving was dusting off some old text about cognitive styles and teamwork: learning to treat agents like a new team member with extreme tendencies, and to observe both my practices and the agents' in order to understand one another's strengths and weaknesses, indicating how we might work better together.

I think this perspective also goes a long way to understanding the very different results different devs get from these tools.

my main approach to quality is to focus agent power on all that code which I do not care about the beauty of: problems with verifiable solutions, experiments, disposable computation. eg my current projects are build/deploy tools, and I need sample projects to build/deploy. I never even reviewed the sample projects' code: so long as they hit the points we are testing.

svelte does not really resonate with me, so I don't know it well, but I suspect there should be good opportunities for TDD in this rewrite. not the project unit tests, just disposable test scripts that guide and constrain new dev work.

you are right to notice that it is not working for you, and at this stage sometimes the correct way to get in sync with the agents is to start again, without previous missteps to poison the workspace. There's good advice in this thread, you might like to experiment with good advice on a clean slate.

[+] jdelsman|3 months ago|reply
My favorite set of tools to use with Claude Code right now: https://github.com/obra/superpowers

1. Start with the ‘brainstorm’ session, where you explain the feature or task you're trying to complete.

2. Allow it to write up a design doc, then an implementation plan (both saved to disk) by asking you multiple clarifying questions. Feel free to use voice transcription for this; it is probably as good as typing, if not better.

3. Open up a new Claude window and use a git worktree with the Execute Plan command. This will essentially build out the work in multiple steps, committing after about every three tasks. What I like to do is have it review its work after three tasks as well, so that you get easier code review and a bit more confidence that it's doing what you want it to do.

Overall, this hasn't really failed me yet and I've been using it now for two weeks and I've used about, I don't know, somewhere in the range of 10 million tokens this week alone.
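The worktree part of step 3 can be sketched like this. The repo and branch names are made up for the demo; normally you would run only the `git worktree add` line inside your existing repository:

```shell
# Throwaway repo so the snippet is self-contained.
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Give the agent an isolated checkout on its own branch: it can execute
# the plan in ../demo-plan without touching your main working tree.
git worktree add -q ../demo-plan -b execute-plan
git worktree list
```

When the run is done, `git worktree remove ../demo-plan` cleans it up and the branch survives for review.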

[+] rdrd|3 months ago|reply
First you have to be very specific with what you mean by idiomatic code - what’s idiomatic for you is not idiomatic for an LLM. Personally I would approach it like this:

1) Thoroughly define step-by-step what you deem to be the code convention/style you want to adhere to, and the steps for how you (it) should approach the task. Do not reference entire files like “produce it like this file”; it's too broad. The document should include simple, small examples of “Good” and “Bad” idiomatic code as you deem it. The smaller the initial step-by-step guide and code conventions the better: context is king with LLMs, and you need to give it just enough context to work with, but not so much that it causes confusion.

2) Feed it to Opus 4.5 in planning mode and ask it to follow up with any questions or gaps and have it produce a final implementation plan.md. Review this, tweak it, remove any fluff and get it down to bare bones.

3) Run the plan.md through a fresh Agentic session and see what the output is like. Where it’s not quite correct add those clarifications and guardrails into the original plan.md and go again with step 3.

What I absolutely would NOT do is ask for fixes or changes if it does not one-shot it after the first go. I would revise plan.md to get it into a state where it gets you 99% of the way there in the first go and just do final cleanup by hand. You will bang your head against the wall attempting to guide it like you would a junior developer (at least for something like this).
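As a rough sketch, a plan.md produced this way might be structured like the following. All the contents are invented for illustration:

```markdown
# Plan: rewrite the /articles route

## Conventions
- Good: `<article class="prose">…</article>` with semantic tags.
- Bad: nested anonymous `<div>`s carrying Bootstrap classes.

## Steps
1. Build `+page.server.ts` from the Django view's context.
2. Split the template into one Svelte component per major section.
3. Add a Storybook story per component.

## Verification
- Type checks pass; the rendered page matches the original styling.
```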

[+] dboon|3 months ago|reply
AI programming, for me, is just a few simple rules:

1. True vibe coding (one-shot, non-trivial, push to master) does not work. Do not try it.

2. Break your task into verifiable chunks. Work with Claude to this end.

3. Put the entire plan into a Markdown file; it should be as concise as possible. You need a summary of the task; individual problems to solve; references to files and symbols in the source code; a work list, separated by verification points. Seriously, less is more.

4. Then, just loop: Start a new session. Ask it to implement the next phase. Read the code, ask for tweaks. Commit when you're happy.

Seriously, that's it. Anything more than that is roleplaying. Anything less is not engineering. Keep a list in the Markdown file of amendments; if it keeps messing the same thing up, add one line to the list.

To hammer home the most important pieces:

- Less is more. LLMs are at their best with a fresh context window. Keep one file. Something between 500 and 750 words (checking a recent one, I have 555 words / 4276 characters). If that's not sufficient, the task is too big.

- Verifiable chunks. It must be verifiable. There is no other way. It could be unit tests; print statements; a tmux session. But it must be verifiable.

[+] vaibhavgeek|3 months ago|reply
This may sound strange but here is how I define my flow.

1. Switch off your computer.

2. Go to a nice Park.

3. Open notebook and pen, and write prompts that are 6-8 lines long on what task you want to achieve, use phone to google specific libraries.

4. Come back to your PC, type those prompts in with Plan mode and ask for exact code changes claude is going to make.

5. Review and push PR.

6. Wait for your job to be automated.

[+] bikeshaving|3 months ago|reply
You know when Claude Code for Terminal starts scroll-looping and doom-scrolling through the entire conversation in an uninterruptible fashion? Just try reading as much of it as you can. It strengthens your ability to read code in an instant and keeps you alert. And if people watch you pretend to understand your screen, it makes you look like a mentat.

It’s actually a feature, not a bug.

[+] __mharrison__|3 months ago|reply
I have a whole workflow for coding with agents.

Get very good at context management (updating AGENTS.md, starting new session, etc).

Embrace TDD. It might have been annoying when Extreme Programming came out 25 years ago, but now that agents can type a lot faster than us, it's an awesome tool for putting guardrails around the agent.

(I teach workshops on best practices for agentic coding)

[+] PostOnce|3 months ago|reply
If anyone knew the answer to this question, Anthropic would be profitable.

Currently they project they might break even in 2028.

That means that right now, every time you ask an AI a question, someone loses money.

That of course means no-one knows if you can get better at AI programming, and the answer may be "you can't."

Only time will tell.

[+] firefax|3 months ago|reply
How did you learn to use AI for coding? I'm open to the idea that a lot of "software carpentry" tasks (moving/renaming files, basic data analysis, etc.) can be done with AI to free up time for higher-level analysis, but I have no idea where to begin. My focus many years ago was privacy, so I lean towards doing everything locally or on a server I control, and I lack a lot of the "cloud" knowledge my HN brethren have.
[+] realberkeaslan|3 months ago|reply
Consider giving Cursor a try. I personally like the entire UI/UX, their agent has good context, and the entire experience overall is just great. The team has done a phenomenal job. Your workflow could look something like this:

1. Prompt the agent

2. The agent gets to work

3. Review the changes

4. Repeat

This can speed up your process significantly, and the UI clearly shows the changes + some other cool features

EDIT: from reading your post again, I think you could benefit primarily from a clear UI with the adjusted code, which Cursor does very well.

[+] whatever1|3 months ago|reply
For me what vastly improved the usefulness when working with big json responses was to install jq in my system and tell the llm to use jq to explore the json, instead of just trying to ingest it all together. For other things I explicitly ask it to write a script to achieve something instead of doing it directly.
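A minimal version of that jq-exploration pattern looks like this. The `response.json` here is a tiny stand-in created just so the snippet runs on its own:

```shell
# Stand-in for a big API response the LLM shouldn't ingest whole.
cat > response.json <<'EOF'
{"users":[{"id":1,"name":"ada"},{"id":2,"name":"lin"}],"meta":{"page":1}}
EOF

# 1. What top-level keys exist?
jq -r 'keys[]' response.json          # meta, users

# 2. How big is each array, without printing it?
jq '.users | length' response.json

# 3. Drill into just the fields you need.
jq -r '.users[].name' response.json
```

The point is that the model runs cheap, targeted queries instead of pulling the whole payload into context.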
[+] nextaccountic|3 months ago|reply
About Svelte: on the Svelte subreddit it was reported that GPT 5.2 is better at Svelte, perhaps because it has a more recent knowledge cutoff.

But either way, you should set up the Svelte MCP server.

[+] justinzollars|3 months ago|reply
Start with a problem. I'm building https://helppet.ai, a voice agent for veterinarians. I didn't know anything about AI programming other than the absolute fundamentals I learned in 2017 through Stanford's AI course, and that was just theory; a lot has changed since. I followed startups solving similar problems, and I asked detailed questions of everyone I could. I went to AI hack events to learn techniques others are using. Eventually I ended up with something pretty great, and had fun doing so. So start with a problem and then work backwards.