Ask HN: How can I get better at using AI for programming?
471 points | lemonlime227 | 3 months ago | reply
I've had a fairly steady process for doing this: look at each route defined in Django, build out my `+page.server.ts`, and then split each major section of the page into a Svelte component with a matching Storybook story. It takes a lot of time to do this, since I have to ensure I'm not just copying the template but rather recreating it in a more idiomatic style.
This kind of work seems like a great use case for AI-assisted programming, but I've failed to use it effectively. At most, I can get Claude Code to recreate some slightly less spaghetti code in Svelte. Simple prompting just isn't able to get the AI's code quality within 90% of what I'd write by hand. Ideally, AI could get its code to something I could review manually in 15-20 minutes, which would massively speed up this project (right now it takes me 1-2 hours to properly translate a route).
Do you guys have tips or suggestions on how to improve my efficiency and code quality with AI?
[+] [-] bcherny|3 months ago|reply
1. If there is anything Claude tends to repeatedly get wrong, not understand, or spend lots of tokens on, put it in your CLAUDE.md. Claude automatically reads this file and it’s a great way to avoid repeating yourself. I add to my team’s CLAUDE.md multiple times a week.
2. Use Plan mode (press shift-tab 2x). Go back and forth with Claude until you like the plan before you let Claude execute. This easily 2-3x’s results for harder tasks.
3. Give the model a way to check its work. For svelte, consider using the Puppeteer MCP server and tell Claude to check its work in the browser. This is another 2-3x.
4. Use Opus 4.5. It’s a step change from Sonnet 4.5 and earlier models.
Hope that helps!
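The CLAUDE.md additions from tip 1 might look something like this (the project rules below are invented for illustration; yours will differ):

```
# CLAUDE.md

## Conventions
- Use Svelte 5 runes ($state, $props); never legacy store syntax.
- Every new component gets a matching Storybook story next to it.

## Things Claude keeps getting wrong
- Run `npm run check` and make it pass before declaring a task done.
- Never edit generated files under src/lib/api/.
```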
[+] [-] epolanski|3 months ago|reply
Sure, for four or five interactions; then it will ignore those completely :)
Try it for yourself: add an instruction to CLAUDE.md to always refer to you as Mr. bcherny, and it will stop doing so very soon. Coincidentally, at that point it also loses track of all the other instructions.
[+] [-] keepamovin|3 months ago|reply
Some things I found from my own interactions across multiple models (in addition to above):
- It's basically all about the importance of (3). You need a feedback loop (we all do), and the best way is for the model to change things and see the effects (ideally against a good baseline, like a test suite, where it can roughly gauge how close or far it is from the goal). For assembly, a debugger/tracer works great (use batch mode or scripts, as models/tooling often choke on interactive TUI I/O).
- If it keeps missing the mark, tell it to decorate the code with a file log recording all the info it needs to understand what's happening. Its analysis of such logs normally zeroes in on the solution pretty quickly, especially for complex tasks.
- If it's really struggling, tell it to sketch out a full plan in pseudocode, explain why that plan will work, and analyze it for any gotchas. Then have it analyze the differences between the current implementation and the ideal it just worked out. This often helps get it unblocked.
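A file-log instruction from the second bullet can be as blunt as this (the wording is just an example):

```
The retry logic still fails intermittently. Before changing any code, add
logging that writes every request, response status, and computed backoff
delay to debug.log. Run the repro, read debug.log, and explain what is
actually happening.
```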
[+] [-] glamp|3 months ago|reply
I couldn't agree more. And using Plan mode was a major breakthrough for me. Speaking of Plan Mode...
I was previously using it repeatedly in sessions (and was getting great results). The most recent major release introduced this bug where it keeps referring back to the first plan you made in a session even when you're planning something else (https://github.com/anthropics/claude-code/issues/12505).
I find this bug incredibly confusing. Am I using Plan Mode in a really strange way? For me this is a showstopper: my core workflow is broken. I assume I'm using Claude Code abnormally; otherwise this bug would be a bigger issue.
[+] [-] malloc2048|3 months ago|reply
I compared both with the same set of prompts, and Claude Code seemed like a senior expert developer; Jules... well, I don't know anyone who'd be that bad ;-)
Anyway, I also wanted persistent information, so I don't have to feed Claude Code the same stuff over and over again. I was looking for functionality similar to Claude Projects, but that's not available for Claude Code on the web.
So I asked Claude what would be a way of achieving pretty much the same thing as Projects, and it told me to put all the information I wanted to share in a file named `.clinerules`, in the root of my repository.
So please help me, is your recommendation the correct way of doing this, or did Claude give the correct answer?
Maybe you can clear that up by explaining the difference between the two files?
[+] [-] moribvndvs|3 months ago|reply
I feel like when I use plan mode (in CC and competing products), the plan seems good, but when I tell it to execute, the output is not what we planned. I get slightly better results executing from a document in chunks (which of course necessitates building the iterative chunks into the plan).
[+] [-] tlarkworthy|3 months ago|reply
https://gist.github.com/a-c-m/f4cead5ca125d2eaad073dfd71efbc...
That moves stuff that required manual clarification back into the CLAUDE.md (or a useful subset you pick). It does a much better job of authoring CLAUDE.md than I do.
[+] [-] cafebeen|3 months ago|reply
One other feature of CLAUDE.md I’ve found useful is imports: prepending @ to a file name forces it to be imported into context. Otherwise, whether a file is read and loaded into context depends on tool use and planning by the agent (even with explicit instructions like “read file.txt”). Of course this means you have to be judicious with imports.
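In CLAUDE.md, that import syntax looks like this (the paths here are hypothetical):

```
See @README.md for the project overview.
Follow @docs/svelte-style.md when writing components.
```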
[+] [-] sahilagarwal|3 months ago|reply
If you wouldn't mind answering a question for me: it's the main thing that has kept me from adding Claude in VS Code.
I have a custom 'code style' system prompt that I want Claude to use, and I've been able to add it when using Claude in the browser:
```
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.

Trust the context you're given. Don't defend against problems the human didn't ask you to solve.
```
How can I add it as a system prompt (or whatever it's called) in VS Code so LLMs adhere to it?
[+] [-] scellus|3 months ago|reply
You can do this with any agentic harness, just plain prompting and "LLM management skills". I don't have Claude Code at work, but all this applies to Codex and GH Copilot agents as well.
And agreed, Opus 4.5 is next level.
[+] [-] matt3210|3 months ago|reply
My current understanding is that it’s for demos and toy projects
[+] [-] goalieca|3 months ago|reply
This concerns me because fighting tooling is not a positive thing. It’s very negative and indicates how immature everything is.
[+] [-] emseetech|3 months ago|reply
This is the pattern I settled on about a year ago. I use it as a rubber-duck / conversation partner for bigger picture issues. I'll run my code through it as a sanity "pre-check" before a pr review. And I mapped autocomplete to ctrl-; in vim so I only bring it up when I need it.
Otherwise, I write everything myself. AI written code never felt safe. It adds velocity but velocity early on always steals speed from the future. That's been the case for languages, for frameworks, for libraries, it's no different for AI.
In other words, you get better at using AI for programming by recognizing where its strengths lie and going all in on those strengths. Don't twist up in knots trying to get it to do decently what you can already do well yourself.
[+] [-] bogtog|3 months ago|reply
(I'm not a particularly slow typer. I can go 70-90 WPM on a typing test. However, this speed drops quickly when I need to also think about what I'm saying. Typing that fast is also kinda tiring, whereas talking/thinking at 100-120 WPM feels comfortable. In general, I think just this lowered friction makes me much more willing to fully describe what I want)
You can also ask it, "do you have any questions?" I find that saying "if you have any questions, ask me, otherwise go ahead and build this" rarely produces questions for me. However, if I say "Make a plan and ask me any questions you may have" then it usually has a few questions
I've also found a lot of success when I tell Claude Code to emulate some specific piece of code I've previously written, either within the same project or something I've pasted in.
[+] [-] Marsymars|3 months ago|reply
This doesn't feel relatable at all to me. If my writing speed is bottlenecked by thinking about what I'm writing, and my talking speed is significantly faster, that just means I've removed the bottleneck by not thinking about what I'm saying.
[+] [-] cjflog|3 months ago|reply
My go-to prompt finisher, which I have mapped to a hotkey due to frequent use, is "Before writing any code, first analyze the problem and requirements and identify any ambiguities, contradictions, or issues. Ask me to clarify any questions you have, and then we'll proceed to writing the code"
[+] [-] Applejinx|3 months ago|reply
It's like a reasoning model. Don't ask, prompt 'and here is where you come up with apropos questions' and you shall have them, possibly even in a useful way.
[+] [-] d4rkp4ttern|3 months ago|reply
My regular workflow is to talk (I use VoiceInk for transcription) and then say “tell me what you understood” — this puts your words into a well structured format, and you can also make sure the cli-agent got it, and expressing it explicitly likely also helps it stay on track.
[+] [-] serial_dev|3 months ago|reply
I would open a chat and refactor the template together with cursor: I would tell it what I want and if I don’t like something, I would help it to understand what I like and why. Do this for one route and when you are ready, ask cursor to write a rules file based on the current chat that includes the examples that you wanted to change and some rationale as to why you wanted it that way.
Then in the next route, you can basically just say refactor and that’s it. Whenever you find something that you don’t like, tell it and remind cursor to also update the rules file.
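A rules file generated that way might come out something like this (Cursor reads project rules from `.cursor/rules/`; the rule content below is invented for illustration):

```
---
description: Django-template-to-Svelte refactor conventions
globs: src/routes/**
alwaysApply: false
---
- Recreate templates idiomatically; never port {% if %}/{% for %} logic verbatim.
- Bad: one monolithic component per page.
- Good: one component per major page section, each with a Storybook story.
```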
[+] [-] Frannky|3 months ago|reply
The more specific and concise you are, the easier it will be for the searcher. Also, the less modification, the better, because the more you try to move away from the data in the training set, the higher the probability of errors.
I would do it like this:
1. Open the project in Zed
2. Add the Gemini CLI, Qwen Code, or Claude to the agent system (use Gemini or Qwen if you want to do it for free, or Claude if you want to pay for it)
3. Ask it to correct a file (if the files are huge, it might be better to split them first)
4. Test if it works
5. If not, try feeding the file and the request to Grok or Gemini 3 chat
6. If nothing works, do it manually
If instead you want to start something new, one-shot prompting can work pretty well, even for large tasks, if the data is in the training set. Ultimately, I see LLMs as a way to legally copy the code of other coders more than anything else
[+] [-] mergesort|3 months ago|reply
The workshop starts off with a very simple premise. I ask people to write their idea down in a Google Doc with all the details they need to hand it off to an AI, so the AI can build it autonomously.
What people discover is that communicating your idea is MUCH harder than they thought. They often write a few sentences or a paragraph, and I plainly ask them "if you gave this to a junior developer do you think they'd be able to build your idea?" They say of course not, and we try again.
We do a v2, a v3, a v4, and so on, while we talk through their ideas, develop new ideas to improve their prompt, and I teach them about how AI can make this process easier. The workshop goes on like this until we have a page or two of context. Finally we can hand the idea off to AI, and boom: a few minutes later they either have their idea or they have something we can quickly mold into their vision.
This part of the process is where I think most people struggle. People think they're good communicators, but they only realize how much work it is to communicate their ideas once they are faced with the prospect of clearly describing their problem and writing it down in front of another person.
I say this not to try to shill my workshops, but to say that the results are spectacular for a simple reason. Describing the problem well is 80% of the work, but once you do that and do it well — AI can take over and do a genuinely excellent job.
I often joke at the end of my workshops that I call these AI workshops, but it's effectively a three-hour workshop on communication. Most software developers wouldn't pay much for a communication workshop even if it made them more effective at using tools like Claude Code, Codex, or even vibe coding, so I wrap everything up in a neat AI sandwich. :)
[1] https://build.ms/ai
[+] [-] theahura|3 months ago|reply
I've spent the last ~4 months figuring out how to make coding agents better, and it's really paid off. The configs at the link above make claude code significantly better, passively. It's a one-shot install, and it may just be able to one-shot your problem, because it does the hard work of 'knowing how to use the agents' for you. Would love to know if you try it out and have any feedback.
(In case anyone is curious, I wrote about these configs and how they work here: https://12gramsofcarbon.com/p/averaging-10-prs-a-day-with-cl...
and I used those configs to get to the top of HN with SpaceJam here: https://news.ycombinator.com/item?id=46193412)
[+] [-] justatdotin|3 months ago|reply
I think this perspective also goes a long way to understanding the very different results different devs get from these tools.
my main approach to quality is to focus agent power on all the code whose beauty I don't care about: problems with verifiable solutions, experiments, disposable computation. E.g., my current projects are build/deploy tools, and I need sample projects to build/deploy. I never even reviewed the sample projects' code: so long as they hit the points we're testing, they're fine.
svelte does not really resonate with me, so I don't know it well, but I suspect there should be good opportunities for TDD in this rewrite. not the project unit tests, just disposable test scripts that guide and constrain new dev work.
you are right to notice that it is not working for you, and at this stage sometimes the correct way to get in sync with the agents is to start again, without previous missteps to poison the workspace. There's good advice in this thread, you might like to experiment with good advice on a clean slate.
[+] [-] jdelsman|3 months ago|reply
1. Start with a ‘brainstorm’ session where you explain the feature or task you're trying to complete.
2. Allow it to write up a design doc, then an implementation plan (both saved to disk), by asking you multiple clarifying questions. Feel free to use voice transcription for this, because it's probably as good as typing, if not better.
3. Open up a new Claude window and use a git worktree with the Execute Plan command. This will build things out in multiple steps, committing after about every three tasks. What I like to do is have it review its work after three tasks as well, so that you get easier code review and a bit more confidence that it's doing what you want it to do.
Overall, this hasn't really failed me yet and I've been using it now for two weeks and I've used about, I don't know, somewhere in the range of 10 million tokens this week alone.
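The worktree half of step 3 is only a couple of commands (the branch and path names here are hypothetical):

```shell
# Give the agent its own branch in a sibling directory,
# away from your main checkout.
git worktree add ../feature-plan -b feature-plan

# ...start the fresh Claude session inside ../feature-plan
# and execute the plan there...

# Once the branch is merged, remove the worktree again.
git worktree remove ../feature-plan
```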
[+] [-] rdrd|3 months ago|reply
1) Thoroughly define step-by-step what you deem to be the code convention/style you want to adhere to and steps on how you (it) should approach the task. Do not reference entire files like “produce it like this file”, it’s too broad. The document should include simple small examples of “Good” and “Bad” idiomatic code as you deem it. The smaller the initial step-by-step guide and code conventions the better, context is king with LLMs and you need to give it just enough context to work with but not enough it causes confusion.
2) Feed it to Opus 4.5 in planning mode and ask it to follow up with any questions or gaps and have it produce a final implementation plan.md. Review this, tweak it, remove any fluff and get it down to bare bones.
3) Run the plan.md through a fresh Agentic session and see what the output is like. Where it’s not quite correct add those clarifications and guardrails into the original plan.md and go again with step 3.
What I absolutely would NOT do is ask for fixes or changes if it does not one-shot it after the first go. I would revise plan.md to get it into a state where it gets you 99% of the way there in the first go and just do final cleanup by hand. You will bang your head against the wall attempting to guide it like you would a junior developer (at least for something like this).
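The step-by-step conventions document from (1) might contain entries like these (the rules themselves are invented examples; yours will differ):

```
## Data loading
Good: fetch in +page.server.ts, return typed data, pass it down as props.
Bad: fetch inside onMount and mutate a shared store.

## Components
Good: one small component per page section, each with a Storybook story.
Bad: a single 500-line component mirroring the Django template.
```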
[+] [-] dboon|3 months ago|reply
1. True vibe coding (one-shot, non-trivial, push to master) does not work. Do not try it.
2. Break your task into verifiable chunks. Work with Claude to this end.
3. Put the entire plan into a Markdown file; it should be as concise as possible. You need a summary of the task; individual problems to solve; references to files and symbols in the source code; a work list, separated by verification points. Seriously, less is more.
4. Then, just loop: Start a new session. Ask it to implement the next phase. Read the code, ask for tweaks. Commit when you're happy.
Seriously, that's it. Anything more than that is roleplaying. Anything less is not engineering. Keep a list in the Markdown file of amendments; if it keeps messing the same thing up, add one line to the list.
To hammer home the most important pieces:
- Less is more. LLMs are at their best with a fresh context window. Keep one file. Something between 500 and 750 words (checking a recent one, I have 555 words / 4276 characters). If that's not sufficient, the task is too big.
- Verifiable chunks. It must be verifiable. There is no other way. It could be unit tests; print statements; a tmux session. But it must be verifiable.
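A plan file with all four ingredients (summary, problems, references, verifiable work list) might be sketched like this (every name below is hypothetical):

```
# Port /orders route to SvelteKit

Summary: recreate the Django orders view as +page.server.ts plus components.

Problems:
- Map the OrdersView context dict to the load() return value.
- Split the template into OrderTable and OrderFilters components.

Files: src/routes/orders/+page.server.ts, django/app/views.py

Work list:
1. [ ] Implement load(). Verify: `npm run check` passes.
2. [ ] Build OrderTable + story. Verify: story renders in Storybook.

Amendments:
- Never use `any`; prefer the generated API types.
```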
[+] [-] vaibhavgeek|3 months ago|reply
1. Switch off your computer.
2. Go to a nice Park.
3. Open notebook and pen, and write prompts that are 6-8 lines long on what task you want to achieve, use phone to google specific libraries.
4. Come back to your PC, type those prompts in with Plan mode and ask for exact code changes claude is going to make.
5. Review and push PR.
6. Wait for your job to be automated.
[+] [-] bikeshaving|3 months ago|reply
It’s actually a feature, not a bug.
[+] [-] __mharrison__|3 months ago|reply
Get very good at context management (updating AGENTS.md, starting new session, etc).
Embrace TDD. It might have been annoying when Extreme Programming came out 25 years ago, but now that agents can type a lot faster than us, it's an awesome tool for putting guardrails around the agent.
(I teach workshops on best practices for agentic coding)
[+] [-] PostOnce|3 months ago|reply
Currently they project they might break even in 2028.
That means that right now, every time you ask an AI a question, someone loses money.
That of course means no-one knows if you can get better at AI programming, and the answer may be "you can't."
Only time will tell.
[+] [-] realberkeaslan|3 months ago|reply
1. Prompt the agent
2. The agent gets to work
3. Review the changes
4. Repeat
This can speed up your process significantly, and the UI clearly shows the changes + some other cool features
EDIT: from reading your post again, I think you could benefit primarily from a clear UI with the adjusted code, which Cursor does very well.
[+] [-] nextaccountic|3 months ago|reply
But anyway you should set up the Svelte MCP