As the models have progressively improved (able to handle more complex codebases, longer files, etc.), I’ve started using this simple framework on repeat, which seems to work pretty well at one-shotting complex fixes or new features.
[Research] ask the agent to explain current functionality as a way to load the right files into context.
[Plan] ask the agent to brainstorm the best-practice way to implement a new feature or refactor. "Brainstorm" seems to be a keyword that triggers a better questioning loop for the agent. Ask it to write a detailed implementation plan to an md file.
[clear] completely clear the agent's context; this gives better results than just compacting the conversation.
[execute plan] ask the agent to review the specific plan again; sometimes it will ask additional questions, which repeats the planning phase. This loads only the plan into context, and then you have it implement the plan.
[review & test] clear the context again and ask it to review the plan to make sure everything was implemented. This is where I add any unit or integration tests if needed. Also run test suites, type checks, lint, etc.
With this loop I’ve often had it run for 20-30 minutes straight and end up with usable results. It’s become a game of context management and creating a solid testing feedback loop instead of trying to purely one-shot issues.
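The loop above can be sketched as a session transcript (the feature, file names, and test commands are made up for illustration; `/clear` is the context-reset command in tools like Claude Code):

```
# [Research] — fresh session
> Explain how request validation currently works in this service.

# [Plan]
> Brainstorm the best-practice way to add per-tenant rate limiting,
> then write a detailed implementation plan to PLAN.md.
> /clear

# [Execute plan]
> Read PLAN.md, ask any remaining questions, then implement it.
> /clear

# [Review & test]
> Review PLAN.md against the current diff; confirm every step landed.
$ npm test && npm run lint && tsc --noEmit
```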
As of Dec 2025, Sonnet/Opus and GPT Codex are both trained for exploration, and most good agent tools (i.e. opencode, claude-code, codex) have prompts to fire off subagents during an exploration (use the word "explore"), so you should be able to Research without the extra steps of writing plans and resetting context. I'd save that expense unless you need some huge multi-step verifiable plan implemented.
The biggest gotcha I found is that these LLMs love to write code as if it were C/Python, just transliterated into your language of choice. Instead of considering that something should be encapsulated in an object to maintain state, it will write five functions, passing the state as parameters between them. It will also consistently ignore most of the surrounding code, even when reading it would tell it exactly what could be reused. So you end up with copy-pasta code, and unstructured copy-pasta at best.
The other gotcha is that Claude usually ignores CLAUDE.md. So for me, I first prompt it to read it, and then I prompt it to explore. Then, with those two rules, it usually does a good job following my request to fix, or add a new feature, or whatever, all within a single context. These recent agents do a much better job of throwing away useless context.
I do think the older models and agents get better results when writing things to a plan document, but I've noticed recent Opus and Sonnet usually end up just writing the same code into the plan document anyway. That usually ends up confusing the model, because it can't connect the plan to the code around the changes as easily.
Nothing will really work when the models fail at the most basic of reasoning challenges.
I've had models do the complete opposite of what I've put in the plan and guidelines. I've had them go re-read the exact sentences, and still see them come to the opposite conclusion, and my instructions are nothing complex at all.
I used to think one could build a workflow and process around LLMs that extract good value from them consistently, but I'm now not so sure.
I notice that sometimes the model will be in a good state, and do a long chain of edits of good quality. The problem is, it's still a crap-shoot how to get them into a good state.
We've taken those prompts, tweaked them to be more relevant to us and our stack, and have pulled them in as custom commands that can be executed in Claude Code, i.e. `/research_codebase`, `/create_plan`, and `/implement_plan`.
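For reference, a Claude Code custom command is just a markdown file under `.claude/commands/`; the sketch below of an `/implement_plan` command is illustrative (the prompt wording is invented), with `$ARGUMENTS` standing in for whatever you type after the command:

```markdown
<!-- .claude/commands/implement_plan.md -->
Read the plan document at $ARGUMENTS in full before touching any code.
If any step is ambiguous, stop and ask; do not improvise.
Implement the plan step by step, updating the plan file with the
result of each step before starting the next one.
Finish by running the test suite, type checks, and lint.
```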
It's working exceptionally well for me; it helps that I'm very meticulous about reviewing the output and correcting it during the research and planning phases. Aside from a few use cases with mixed results, it hasn't really taken off throughout our team, unfortunately.
I don't do any of that. I find with GitHub copilot and Claude sonnet 4.5 if I'm clear enough about the what and where it'll sort things out pretty well, and then there's only reiteration of code styling or reuse of functionality. At that point it has enough context to keep going. The only time I might clear that whole thing is if I'm working on an entirely new feature where the context is too large and it gets stuck in summarising the history. Otherwise it's good. But this in codespaces. I find the Tasks feature much harder. Almost a write-off when trying to do something big. Twice I've had it go off on some strange tangent and build the most absurd thing. You really need to keep your eyes on it.
This is essentially my exact workflow. I also keep the plan markdown files around in the repo to refer agents back to when adding new features. I have found it to be a really effective loop, and a great way to reprime context when returning to features.
I’m uneasy having an agent implement several pages of plan and then writing tests and results only at the end of all that. It feels like getting a CS student to write and follow a plan to do something they haven’t worked on before.
It’ll report, “Numbers changed in step 6a therefore it worked” [forgetting the pivotal role of step 2 which failed and as a result the agent should have taken step 6b, not 6a].
Or “there is conclusive evidence that X is present and therefore we were successful” [X is discussed in the plan as the reason why action is NEEDED, not as success criteria].
I _think_ what is going wrong is context overload, and my remedy is to have the agent update every step of the plan with results immediately after acting on it, before moving on to the next step.
When things seem off I can then clear context and have the agent review results step by step to debug its own work: “Review step 2 of the results. Are the stated results consistent with the final conclusions? Quote lines from the results verbatim as evidence.”
Highly recommend using agent based hooks for things like `[review & test]`.
At a basic level, they work akin to git hooks, but they fire up a whole new context whenever certain events trigger (e.g. another agent finishes implementing changes), and that hook instance is independent of the implementation context (which is great: for the review case it acts as a semi-independent reviewer).
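In Claude Code these hooks live in `.claude/settings.json`. The sketch below is from memory of the hook schema (event names and fields may differ; check the current docs), and the review prompt is invented. The key point is that the hook's command runs outside the implementing agent's context:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "claude -p 'Review the uncommitted diff against PLAN.md and report any gaps' > review.md"
          }
        ]
      }
    ]
  }
}
```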
I agree this can work okay, but once I find myself doing this much handholding I would prefer to drive the process myself. Coordinating 4 agents and guiding them along really makes you appreciate the mythical-man-month on the scale of hours.
> Making a prompt library useful requires iteration. Every time the LLM is slightly off target, ask yourself, "What could've been clarified?" Then, add that answer back into the prompt library.
I'm far from an LLM power user, but this is the single highest ROI practice I've been using.
You have to actually observe what the LLM is trying to do each time. Simply smashing enter over and over again or setting it to auto-accept everything will just burn tokens. Instead, see where it gets stuck and add a short note to CLAUDE.md or equivalent. Break it out into sub-files to open for different types of work if the context file gets large.
Letting the LLM churn and experiment for every single task will make your token quota evaporate before your eyes. Updating the context file constantly is some extra work for you, but it pays off.
My primary use case for LLMs is exploring code bases and giving me summaries of which files to open, tracing execution paths through functions, and handing me the info I need. It also helps a lot to add some instructions for how to deliver useful results for specific types of questions.
I'm with you on that, but I have to say I have been doing that aggressively, and it's pretty easy for Claude Code at least to ignore the prompts, commands, Markdown files, README, architecture docs, etc.
I feel like I spend quite a bit of time telling the thing to look at information it already knows. And I'm talking about cases where I HAVE actually created the various documents and prompts to use.
As a specific example, it regularly just doesn't reference CLAUDE.md and it seems pretty random as to when it decides to drop that out of context. That's including right at session start when it should have it fresh.
> Every time the LLM is slightly off target, ask yourself, "What could've been clarified?"
Better than that, ask the LLM. Better than that, have the LLM ask itself. You do still have to make sure it doesn't go off the rails, but the LLM itself wrote this to help answer the question:
### Pattern 10: Student Pattern (Fresh Eyes)
*Concept:* Have a sub-agent read documentation/code/prompts "as a newcomer" to find gaps, contradictions, and confusion points that experts miss.
*Why it works:* Developers write with implicit knowledge they don't realize is missing. A "student" perspective catches assumptions, undefined terms, and inconsistencies.
```
Pretend you are a NEW AI agent who has never seen this codebase.
Read these docs as if encountering them for the first time:
1. CLAUDE.md
2. SUB_AGENT_QUICK_START.md

Then answer from a fresh perspective:

## Confusion Points
- What was confusing or unclear on first read?
- What terms are used without explanation?

## Contradictions
- Where do docs disagree with each other?
- What's inconsistent?

## Missing Information
- What would a new agent need to know that isn't covered?

## Recommendations
- Concrete edits to improve clarity

Be honest and critical. Include file:line references.
```
*Use cases:* Before finalizing new documentation; evaluating prompts for future agents.
I'm interested to see where we'll land re: organizing larger codebases to accommodate agents.
I've been having a lot of fun taking my larger projects and decomposing them into directed graphs where the nodes are nix flakes. If I launch claude code in a flake devshell it has access to only those tools, and it sees the flake.nix and assumes that the project is bounded by the CWD even though it's actually much larger, so its context is small and it doesn't get overwhelmed.
Inputs/outputs are a nice language agnostic mechanism for coordinating between flakes (just gotta remember to `nix flake update --update-input` when you want updated outputs from an adjacent flake). Then I can have them write feature requests for each other and help each other test fixtures and features. I also like watching them debate over a design, they get lazy and assume the other "team" will do the work, but eventually settle on something reasonable.
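For anyone unfamiliar with flakes, a node in such a graph might look roughly like this (a sketch; the `ingest` neighbour, package names, and tool choices are all illustrative):

```nix
{
  description = "One node in the project graph";

  # The neighbouring node is consumed only through its declared flake
  # outputs, never by reaching into its source tree.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
  inputs.ingest.url = "path:../ingest";

  outputs = { self, nixpkgs, ingest }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # An agent launched in this dev shell sees only these tools and
      # this directory, so its context stays small and bounded.
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.go pkgs.gopls ];
        # e.g. consume a schema the neighbour exports as a package:
        # SCHEMA = "${ingest.packages.x86_64-linux.schema}";
      };
    };
}
```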
I've been running with the idea for a few weeks, maybe it's dumb, but I'd be surprised if this kind of rethinking didn't eventually yield a radical shift in how we organize code, even if the details look nothing like what I've come up with. Somehow we gotta get good at partitioning context so we can avoid the worst parts of the exponential increase in token volume that comes from submitting the entire chat session history just to get the next response.
I'd be keen to read/hear more about the experiment you've been undertaking, as I too have been thinking about the impact on the design/architecture/organisation of software.
The focus mainly seems to be on enhancing existing workflows to produce the code we currently expect; often you hear it's like a junior dev.
The type of rethinking you outlined could leave code organised in such a way that a junior dev would never be able to extend it, but our 'junior dev' LLM can iterate through changes easily.
I care more about the properties of software (e.g. testable, extendable, secure) than how it is organised.
It gets me thinking about questions like:
- what is the correlation between how code is organised vs its properties?
- what is the optimal organisation of code to facilitate llms to modify and extend software?
Yeah, this is an interesting approach, both for the context partitioning and for reproducibility and dependency pinning. I was toying with this before needing to run with just Docker on a project. Would be nice to find a tool that streamlines some of this.
LLMs are so good at telling me about things I know little to nothing about, but when I ask about things I have expert knowledge on they consistently fail, hallucinate, and confidently lie...
I’ve found that they vary a huge amount based on the subject matter. In my case, I have noticed the opposite of what you observed. They know a lot about the web space (which I’ve been in for around 25 years), but are pretty bad (though not useless) at esoteric languages such as Hare.
I think you end up asking it basic questions about stuff you know little about, but much more complex/difficult questions for stuff you're already an expert in.
I have a somewhat different take on this (somewhat captured in the post linked below).
IMO, the best way to raise the floor of LLM performance in codebases is by building meaning into the code base itself ala DDD. If your codebase is hard to understand and grok for a human, it will be the same for an LLM. If your codebase is unstructured and has no definable patterns, it will be harder for an LLM to use.
You can try to overcome this with even more tooling and more workflows but IMO, it is throwing good money after bad. It is ironic and maybe unpopular, but it turns out LLMs prove that all the folks yapping about language and meaning (re: DDD) were right.
Great post. I work on two large codebases. One is structured much like the example from the post, and the other is a mess. LLMs are much better at understanding the organized code.
> Here's a LLM literacy dipstick: ask a peer engineer to read some code they're unfamiliar with. Do they understand it? ... No? Then the LLM won't either.
Of course, but the problem is the converse: There are too many situations where a peer engineer will know what to do but the agent won't. This means that it requires more work to make a codebase understandable to a human than it does to make it understandable to an agent.
> Moving more implementation feedback from human to computer helps us improve the chance of one-shotting... Think of these as bumper rails. You can increase the likelihood of an LLM reaching the bowling pins by making it impossible to land in the gutter.
Sort of, but this is also a little like claiming that P = NP. Having an efficient way to reliably check whether a solution is correct is not at all the same as having a reliable way to find a solution; the theory of computation tells us it probably isn't. The likelihood may well be higher, yet still not high enough. Even though NP problems are theoretically strictly easier than EXPTIME ones, in practice, in many situations (though not all), they are equally intractable.
In fact, we can put the claim to the test: there are languages, like ATS and Idris, that make almost any property provable and checkable. These languages let the programmer (human or machine) position the "bumper rails" so precisely as to ensure we hit the target. We can ask the agent to write the code, write the proof of correctness, and check it. We'd still need to check that the correctness property is the right one, but if the claim is correct, coding agents should be best at writing code, accompanied by correctness proofs, in ATS or Idris. Are they?
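To make the "precisely positioned bumper rails" concrete: the standard example of a property encoded in a type (what Idris calls `Vect`) can be written in Lean as a length-indexed vector, where an append that mishandles the length simply won't type-check:

```lean
-- A vector whose length is part of its type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- The return type pins the result length to m + n; an implementation
-- that drops or duplicates elements in a length-changing way is
-- rejected by the checker rather than caught (or missed) in review.
def append : Vec α n → Vec α m → Vec α (m + n)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (append xs ys)
```

This is the benign end of the spectrum; whether coding agents can discharge serious proof obligations in ATS or Idris at scale is exactly the open question the comment poses.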
Obviously, mileage may vary depending on the task and the domain, but if it's true that coding models will get significantly better, then the best course of action may well be, in many cases, to just wait until they do rather than spend a lot of effort working around their current limitations, effort that will be wasted if and when capabilities improve. And that's the big question: are we in for a long haul where agent capabilities remain roughly where they are today or not?
I have the complete opposite experience: once some pattern already exists 2-3 times in the codebase, the LLMs start accurately replicating it instead of trying to solve everything as one-off solutions.
> You can’t be inconsistent if there are no existing patterns.
"Consistency" shouldn't be equated with "good". If consistency is your only metric for quality and you don't apply any taste, letting an LLM do its thing in a greenfield project will quickly produce an unmaintainable hodgepodge of second-grade libraries.
The issues raised in this article are why I think highly opinionated frameworks will lead to higher developer productivity when using AI-assisted coding.
You may not like all the opinions of the framework, but the LLM knows them and you don’t need to write up any guidelines for it.
Yep. I ran an experiment this morning building the same app in Go, Rust, Bun, Ruby (Rails), Elixir (Phoenix), and C# (ASP whatever). Rails was a done deal almost right away. Bun took a lot of guidance, but I liked the result. The rest was a lot more work with so-so results — even Phoenix, surprisingly.
I liked the Rust solution a lot, but it had 200+ dependencies vs Bun’s 5 and Rails’ 20ish (iirc). Rust feels like it inherited the NPM “pull in a thousand dependencies per problem” philosophy, which is a real shame.
I can vouch for this as someone who works in a 1.6 million line codebase, where there are constant deviations and inconsistent patterns. LLMs have been almost completely useless on it other than for small functions or files.
> This is the garbage in, garbage out principle in action. The utility of a model is bottlenecked by its inputs. The more garbage you have, the more likely hallucinations will occur.
Good read, but I wouldn't fully extend the garbage in, garbage out principle to LLMs. These massive LLMs are trained on internet-scale data, which includes a significant amount of garbage, and still do pretty well. Hallucinations are due more to missing or misleading context than to noise alone. Tech-debt-heavy codebases, though unstructured, still provide information-rich context.
> but it feels like a linkedin rehashing of stuff the people at the edge have already known for a while.
You're not wrong, but it bears repeating to newcomers.
The average LLM user I encounter is still just hammering questions into the prompt and getting frustrated when the LLM makes the same mistakes over and over again.
It's like people are rediscovering the most basic principles: e.g. that documentation (a "prompt library") is useful, or that well-organized code leads to higher velocity in development.
Biggest change to my workflow has been to break down projects to smaller parts using libraries. So where I in the past would put everything in the same code base I now break down stuff that can be separate to its own libraries (like wrapping an external API). That way the AI only needs to read the docs for the library instead of having to read all the code when working on features that use the API.
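A minimal sketch of the idea above in Python, with the API and names invented for illustration: the wrapper library's docstrings are the only thing an agent (or teammate) needs to read, and the transport is injected so nothing here touches the network.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Forecast:
    """One day's forecast, as exposed by the wrapper."""
    city: str
    temp_c: float

class WeatherClient:
    """Thin wrapper around a hypothetical external weather API.

    Callers only need this interface and its docstrings; the HTTP
    details live behind the injected `fetch` function.
    """

    def __init__(self, fetch: Callable[[str], dict]):
        self._fetch = fetch  # e.g. a function doing the real HTTP call

    def forecast(self, city: str) -> Forecast:
        """Return today's forecast for `city`."""
        raw = self._fetch(f"/forecast?city={city}")
        return Forecast(city=city, temp_c=float(raw["temp_c"]))

# Usage with a stubbed transport (no real API needed):
stub = lambda path: {"temp_c": "21.5"}
client = WeatherClient(stub)
print(client.forecast("Oslo"))  # Forecast(city='Oslo', temp_c=21.5)
```

The same injection point that keeps the library testable is what lets an agent work on features using the API without ever loading the transport code into context.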
It's kind of crazy that the knee-jerk reaction to failing to one-shot your prompt is to abandon the whole thing because you think the tool sucks. It very well might, but it could also be user error or any number of other things. There wouldn't be a good night's sleep in sight if I knew an LLM was running rampant all over production code in an effort to "scale it".
There’s always a trade off in terms of alternative approaches. So I don’t think it’s “crazy” that if one fails you switch to a different one. Sure, sometimes persistence can pay off, but not always.
Like if I go to a restaurant for the first time and the item I order is bad, could I go back and try something else? Perhaps, but I could also go somewhere else.
I'm okay with writing developer docs in the form of agent instructions, those are useful for humans too. If they start to get oddly specific or sound mental, then it's obviously the tool at fault.
Just over the weekend, I decided to shell out for the top tier Claude Code to give it a try... definitely an improvement over the year I spent with Github CoPilot enabled on my personal projects (mostly an annoyance more than a help that I eventually disabled altogether).
I've seen some impressive output so far, and have a couple of friends who have been using AI generation a lot. I'm trying to create a couple of legacy-style (BBS-tech-related, in Rust) applications to see how they land. So far it's mostly been planning and structure, beyond the time I've spent in contemplation. I'm not sure I can justify the expense long term, but I want to experience the fuss a bit more to have at least a better awareness.
Is it not the case that "production level code" coming out of these processes makes the whole system of coder-plus-machine weaker?
I find it to be a good thing that the code must be read in order to be production-grade, because that implies the coder must keep learning.
I worry about the collapse of the knowledge pipeline when there is very little benefit to overseeing the process...
I say that as a bad coder who can and has done SO MUCH MORE with llm agents. So I'm not writing this as someone who has an ideal of coding that is being eroded. I'm just entering the realm of "what elite coding can do" with LLMs, but I worry for what the realm will lose, even as I'm just arriving
You can't get away from the engineering part of software engineering even if you are using LLMs. I have been using Claude Opus 4.5, and it's the best out of the models I have tried. I find that I can get Claude to work well if I already know the steps I need to do beforehand, and I can get it to do all of the boring stuff. So it's a series of very focused and directed one-shot prompts that it largely gets correct, because I'm not giving it a huge task, or something open-ended.
Knowing how you would implement the solution beforehand is a huge help, because then you can just tell the LLM to do the boring/tedious bits.
They’re good for getting you from A to B. But you need to know A (current state of the code) and how to get to B (desired end state). They’re fast typers not automated engineers.
Seriously, I stopped using agent mode altogether. I hit it with something very specific, like: write a function that takes an array of X and returns Y.
It almost never fails and usually does it in a neat way; plus it's ~50 lines of code, so I can copy and paste confidently. Letting the agent go wild on my code has always been a PITA for me.
“When an LLM can generate a working high-quality implementation in a single try, that is called one-shotting. This is the most efficient form of LLM programming.”
This is a good article, but misses one of the most important advances this year - the agentic loop.
There are always going to be limits to how much code a model can one-shot. Give it the ability to verify its changes and iterate, and you massively increase its ability to write sizeable chunks of verified, working code.
I’ve ended up with a workflow that lines up pretty closely with the guidance/oversight framing in the article, but with one extra separation that’s been critical for me.
I’m working on a fairly messy ingestion pipeline (Instagram exports → thumbnails → grouped “posts” → frontend rendering). The data is inconsistent, partially undocumented, and correctness is only visible once you actually look at the rendered output. That makes it a bad fit for naïve one-shotting.
What’s worked is splitting responsibility very explicitly:
• Human (me): judge correctness against reality. I look at the data, the UI, and say things like “these six media files must collapse into one post”, “stories should not appear in this mode”, “timestamps are wrong”. This part is non-negotiably human.
• LLM as planner/architect: translate those judgments into invariants and constraints (“group by export container, never flatten before grouping”, “IG mode must only consider media/posts/*”, “fallback must never yield empty output”). This model is reasoning about structure, not typing code.
• LLM as implementor (Codex-style): receives a very boring, very explicit prompt derived from the plan. Exact files, exact functions, no interpretation, no design freedom. Its job is mechanical execution.
Crucially, I don’t ask the same model to both decide what should change and how to change it. When I do, rework explodes, especially in pipelines where the ground truth lives outside the code (real data + rendered output).
This also mirrors something the article hints at but doesn’t fully spell out: the codebase isn’t just context, it’s a contract. Once the planner layer encodes the rules, the implementor can one-shot surprisingly large changes because it’s no longer guessing intent.
The challenges are mostly around discipline:
• You have to resist letting the implementor improvise.
• You have to keep plans small and concrete.
• You still need guardrails (build-time checks, sanity logs) because mistakes are silent otherwise.
But when it works, it scales much better than long conversational prompts. It feels less like “pair programming with an AI” and more like supervising a very fast, very literal junior engineer who never gets tired, which, in practice, is exactly what these tools are good at.
If you're interested in the large codebase... The best I found so far are extended context models.
Using the newest Nemotron 3 Nano, you can put 1M tokens (about 3 MB of text) of pure code dump into context (I use `repomix --style markdown`) and ask around.
That's been one of the biggest wow moments I've had with LLMs so far. A much better experience than any RAG I've used.
Using AugmentCode's Context Engine you can get this either through their VSCode/JetBrains plugins, their Auggie command line coding agent or by registering their MCP server with your local coding agent like Claude Code. It works far better than painstakingly stuffing your own context manually or having your agent use grep/lsp/etc to try and find what it needs.
This highlights a missing feature of LLM tooling, which is asking questions of the user. I've been experimenting with Gemini in VS Code, and it just fills in missing information by guessing and then runs off writing paragraphs of design and a bunch of code changes that could have been avoided by asking for clarification at the beginning.
Claude does have this specific interface for asking questions now. I've only had it choose to ask me questions on its own a very few times though. But I did have it ask clarifying questions before that interface was even a thing, when I specifically asked it to ask me clarifying questions.
Again, like a junior dev. And like a junior dev, it can also help to ask it to check in mid-way, i.e. watch what it's doing and stop it when it's running down some rabbit hole you know is not gonna yield results.
You'd have to make it do that. Here's a cut-and-paste I keep open on my desktop; I just paste it back in every time things seem to drift:
> Before you proceed, read the local and global Claude.md files and make sure you understand how we work together. Make sure you never proceed beyond your own understanding.
> Always consult the user anytime you reach a judgment call rather than just proceeding. Anytime you encounter unexpected behavior or errors, always pause and consider the situation. Rather than going in circles, ask the user for help; they are always there and available.
> And always work from understanding; never make assumptions or guess. Never come up with field names, method names, or framework ideas without just going and doing the research. Always look at the code first, search online for documentation, and find the answer to things. Never skip that step and guess when you do not know the answer for certain.
And then the Claude.md file has a much more clearly written out explanation of how we work together and how it's a consultative process where every major judgment call should be prompted to the user, and every single completed task should be tested and also asked for user confirmation that it's doing what it's supposed to do. It tends to work pretty well so far.
"Before you start, please ask me any questions you have about this so I can give you more context. Be extremely comprehensive."
(I got the idea from a Medium article[1].) The LLM will, indeed, stop and ask good questions. It often notices what I've overlooked. Works very well for me!
I'm still learning about how LLMs can be used in coding, but this article helped me understand the importance of giving clear instructions and not relying too much on automation. The point about developers still needing to guide the model really makes sense. Thanks for sharing this!
Why do none of these ever touch on token optimization? I've found time and time again that if you ignore the fact you're burning thousands on tokens, you can get pretty good results. Things like prompt libraries and context.md files tend to just burn more tokens per call.
One thing that helped us as codebases grew was separating decision-making from execution. Let the model reason about intent and scope, but keep execution deterministic and constrained. It reduced drift and made failures much easier to debug once context got large.
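A minimal sketch of that split in Python (the plan schema and operations are invented for illustration): the model is trusted only to emit a constrained plan, and a deterministic executor validates and applies it, so failures surface as explicit errors instead of improvised behavior.

```python
import json

# The only operations the executor will perform; anything else is rejected.
ALLOWED_OPS = {"rename", "delete"}

def execute_plan(plan_json: str, files: dict[str, str]) -> dict[str, str]:
    """Apply a model-produced plan deterministically.

    `plan_json` is a JSON list of steps like
    {"op": "rename", "src": "a.py", "dst": "b.py"}.
    Invalid steps raise instead of being silently worked around,
    which keeps failures easy to debug.
    """
    files = dict(files)  # never mutate the caller's state
    for step in json.loads(plan_json):
        op = step.get("op")
        if op not in ALLOWED_OPS:
            raise ValueError(f"disallowed op: {op!r}")
        if op == "rename":
            if step["src"] not in files:
                raise ValueError(f"unknown file: {step['src']}")
            files[step["dst"]] = files.pop(step["src"])
        elif op == "delete":
            files.pop(step["src"], None)
    return files

# The LLM reasons about intent and produces only the plan;
# execution stays deterministic and auditable.
plan = '[{"op": "rename", "src": "utils.py", "dst": "text_utils.py"}]'
print(execute_plan(plan, {"utils.py": "def slug(): ..."}))
```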
mstank|2 months ago
[Research] ask the agent to explain current functionality as a way to load the right files into context.
[Plan] ask the agent to brainstorm the best practices way to implement a new feature or refactor. Brainstorm seems to be a keyword that triggers a better questioning loop for the agent. Ask it to write a detailed implementation plan to an md file.
[clear] completely clear the context of the agent —- better results than just compacting the conversation.
[execute plan] ask the agent to review the specific plan again, sometimes it will ask additional questions which repeats the planning phase again. This loads only the plan into context and then have it implement the plan.
[review & test] clear the context again and ask it to review the plan to make sure everything was implemented. This is where I add any unit or integration tests if needed. Also run test suites, type checks, lint, etc.
With this loop I’ve often had it run for 20-30 minutes straight and end up with usable results. It’s become a game of context management and creating a solid testing feedback loop instead of trying to purely one-shot issues.
jarjoura|2 months ago
The biggest gotcha I found is that these LLMs love to assume that code is C/Python but just in your favorite language of choice. Instead of considering that something should be written encapsulated into an object to maintain state, it will instead write 5 functions, passing the state as parameters between each function. It will also consistently ignore most of the code around it, even if it could benefit from reading it to know what specifically could be reused. So you end up with copy-pasta code, and unstructured copy-pasta at best.
The other gotcha is that claude usually ignores CLAUDE.md. So for me, I first prompt it to read it and then I prompt it to next explore. Then, with those two rules, it usually does a good job following my request to fix, or add a new feature, or whatever, all within a single context. These recent agents do a much better job of throwing away useless context.
I do think the older models and agents get better results when writing things to a plan document, but I've noticed recent opus and sonnet usually end up just writing the same code to the plan document anyway. That usually ends up confusing itself because it can't connect it to the code around the changes as easily.
prmph|2 months ago
I've had models do the complete opposite of what I've put in the plan and guidelines. I've had them go re-read the exact sentences, and still see them come to the opposite conclusion, and my instructions are nothing complex at all.
I used to think one could build a workflow and process around LLMs that extract good value from them consistently, but I'm now not so sure.
I notice that sometimes the model will be in a good state, and do a long chain of edits of good quality. The problem is, it's still a crap-shoot how to get them into a good state.
godzillafarts|2 months ago
We've taken those prompts, tweaked them to be more relevant to us and our stack, and have pulled them in as custom commands that can be executed in Claude Code, i.e. `/research_codebase`, `/create_plan`, and `/implement_plan`.
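For reference, Claude Code picks up project-scoped slash commands from markdown files under `.claude/commands/`, where the filename becomes the command name. A sketch of creating one such command (the prompt text is illustrative, not the commenter's actual version):

```shell
mkdir -p .claude/commands
# $ARGUMENTS is substituted with whatever follows the slash command.
cat > .claude/commands/research_codebase.md <<'EOF'
Explore the codebase to answer this question: $ARGUMENTS

Read the relevant files end to end before summarizing. Report entry
points, key data structures, and where the behavior in question is
implemented, with file:line references. Do not propose changes yet;
this is research only.
EOF
```

Inside Claude Code, `/research_codebase how are sessions persisted?` would then expand that file with the question filled in.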
It's working exceptionally well for me; it helps that I'm very meticulous about reviewing the output and correcting it during the research and planning phases. Aside from a few use cases with mixed results, it hasn't really taken off across our team, unfortunately.
asim|2 months ago
AlexB138|2 months ago
zingar|2 months ago
It’ll report, “Numbers changed in step 6a therefore it worked” [forgetting the pivotal role of step 2 which failed and as a result the agent should have taken step 6b, not 6a].
Or “there is conclusive evidence that X is present and therefore we were successful” [X is discussed in the plan as the reason why action is NEEDED, not as success criteria].
I _think_ that what is going wrong is context overload, and my remedy is to have the agent update every step of the plan with results immediately after acting, before moving on to the next step.
When things seem off I can then clear context and have the agent review results step by step to debug its own work: “review step 2 of the results. Are the stated results consistent with the final conclusions? Quote lines from the results verbatim as evidence.”
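That per-step bookkeeping might look something like this in the plan file (the structure and file names are illustrative, not the commenter's actual format):

```
## Step 2: swap the config loader
Action taken: replaced ad-hoc env reads with ConfigLoader
Result: FAILED (tests in test_config.py still read env directly)
Evidence: pytest output, test_config.py lines 14-20

## Step 6: verify pipeline output
Depends on: Step 2 must be PASSED before choosing 6a over 6b
Result: pending
```

Recording the evidence alongside each result is what makes the later step-by-step review possible in a fresh context.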
dfsegoat|2 months ago
At a basic level, they work akin to git hooks, but they fire up a whole new context whenever certain events trigger (e.g. another agent finishes implementing changes), and that hook instance is independent of the implementation context (which is great, as in the review case it acts as a semi-independent reviewer).
zeroCalories|2 months ago
Aurornis|2 months ago
I'm far from an LLM power user, but this is the single highest ROI practice I've been using.
You have to actually observe what the LLM is trying to do each time. Simply smashing enter over and over again or setting it to auto-accept everything will just burn tokens. Instead, see where it gets stuck and add a short note to CLAUDE.md or equivalent. Break it out into sub-files to open for different types of work if the context file gets large.
Letting the LLM churn and experiment for every single task will make your token quota evaporate before your eyes. Updating the context file constantly is some extra work for you, but it pays off.
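Breaking the context file out can be as simple as keeping CLAUDE.md short and pointing at task-specific notes (the file names here are hypothetical):

```
# CLAUDE.md (kept short)
- Run `make test` before claiming a task is done.
- For database work, read docs/agent/db.md first.
- For frontend work, read docs/agent/frontend.md first.
- Never edit generated files under src/gen/.
```

The sub-files only enter context when the corresponding type of work comes up, which keeps the always-loaded portion small.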
My primary use case for LLMs is exploring code bases and giving me summaries of which files to open, tracing execution paths through functions, and handing me the info I need. It also helps a lot to add some instructions for how to deliver useful results for specific types of questions.
CPLX|2 months ago
I feel like I spend quite a bit of time telling the thing to look at information it already knows. And I'm talking about when I HAVE actually created various documents to use and prompts.
As a specific example, it regularly just doesn't reference CLAUDE.md and it seems pretty random as to when it decides to drop that out of context. That's including right at session start when it should have it fresh.
JonathanFly|2 months ago
Better than that, ask the LLM. Better than that, have the LLM ask itself. You do still have to make sure it doesn't go off the rails, but the LLM itself wrote this to help answer the question:
### Pattern 10: Student Pattern (Fresh Eyes)
*Concept:* Have a sub-agent read documentation/code/prompts "as a newcomer" to find gaps, contradictions, and confusion points that experts miss.
*Why it works:* Developers write with implicit knowledge they don't realize is missing. A "student" perspective catches assumptions, undefined terms, and inconsistencies.
*Example prompt:*

```
Task: "Student Pattern Review

Pretend you are a NEW AI agent who has never seen this codebase. Read these docs as if encountering them for the first time:
1. CLAUDE.md
2. SUB_AGENT_QUICK_START.md

Then answer from a fresh perspective:

## Confusion Points
- What was confusing or unclear on first read?
- What terms are used without explanation?

## Contradictions
- Where do docs disagree with each other?
- What's inconsistent?

## Missing Information
- What would a new agent need to know that isn't covered?

## Recommendations
- Concrete edits to improve clarity

Be honest and critical. Include file:line references."
```
*Use cases:* Before finalizing new documentation, evaluating prompts for future agents.
__MatrixMan__|2 months ago
I've been having a lot of fun taking my larger projects and decomposing them into directed graphs where the nodes are nix flakes. If I launch claude code in a flake devshell it has access to only those tools, and it sees the flake.nix and assumes that the project is bounded by the CWD even though it's actually much larger, so its context is small and it doesn't get overwhelmed.
Inputs/outputs are a nice language agnostic mechanism for coordinating between flakes (just gotta remember to `nix flake update --update-input` when you want updated outputs from an adjacent flake). Then I can have them write feature requests for each other and help each other test fixtures and features. I also like watching them debate over a design, they get lazy and assume the other "team" will do the work, but eventually settle on something reasonable.
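A minimal sketch of one node in such a graph: a flake that consumes an adjacent flake's outputs as an input (the names and paths are illustrative, not the commenter's actual layout):

```nix
{
  # Adjacent node in the graph; refresh its pinned outputs with the
  # `nix flake update --update-input parser` command mentioned above.
  inputs.parser.url = "path:../parser";

  outputs = { self, parser }: {
    # Re-export something for downstream flakes to consume.
    lib.tokenize = parser.lib.tokenize;
  };
}
```

Each flake only sees its own directory and pinned inputs, which is what keeps the agent's view of the project bounded.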
I've been running with the idea for a few weeks, maybe it's dumb, but I'd be surprised if this kind of rethinking didn't eventually yield a radical shift in how we organize code, even if the details look nothing like what I've come up with. Somehow we gotta get good at partitioning context so we can avoid the worst parts of the exponential increase in token volume that comes from submitting the entire chat session history just to get the next response.
salty_frog|2 months ago
The focus mainly seems to be on enhancing existing workflows to produce the code we currently expect; often you hear it's like a junior dev.
The type of rethinking you outlined could have code organised in such a way that a junior dev would never be able to extend it, but our 'junior dev' LLM can iterate through changes easily.
I care more about the properties of software, e.g. testable, extendable, secure, than about how it is organised.
Gets me thinking of questions like:
- what is the correlation between how code is organised and its properties?
- what is the optimal organisation of code to help LLMs modify and extend software?
quinnjh|2 months ago
unknown|2 months ago
[deleted]
lnx01|2 months ago
dmoy|2 months ago
christophilus|2 months ago
llmslave2|2 months ago
throw-12-16|2 months ago
dmofp|2 months ago
IMO, the best way to raise the floor of LLM performance in codebases is by building meaning into the codebase itself, a la DDD. If your codebase is hard for a human to understand and grok, it will be the same for an LLM. If your codebase is unstructured and has no definable patterns, it will be harder for an LLM to use.
You can try to overcome this with even more tooling and more workflows, but IMO it is throwing good money after bad. It is ironic and maybe unpopular, but it turns out LLMs prove that all the folks yapping about language and meaning (re: DDD) were right.
DDD & the Simplicity Gospel:
https://oluatte.com/posts/domain-driven-design-simplicity-go...
dj_gitmo|2 months ago
pron|2 months ago
Of course, but the problem is the converse: There are too many situations where a peer engineer will know what to do but the agent won't. This means that it requires more work to make a codebase understandable to a human than it does to make it understandable to an agent.
> Moving more implementation feedback from human to computer helps us improve the chance of one-shotting... Think of these as bumper rails. You can increase the likelihood of an LLM reaching the bowling pins by making it impossible to land in the gutter.
Sort of, but this is also a little similar to claiming that P = NP. Having an efficient way to reliably check whether a solution is correct is not at all the same as having a reliable way to find a solution; the theory of computation tells us it probably isn't. The likelihood may well be higher yet still not high enough. Even though theoretically NP problems are strictly easier than EXPTIME ones, in practice, in many situations (though not all) they are equally intractable.
In fact, we can put the claim to the test: there are languages, like ATS and Idris, that make almost any property provable and checkable. These languages let the programmer (human or machine) position the "bumper rails" so precisely as to ensure we hit the target. We can ask the agent to write the code, write the proof of correctness, and check it. We'd still need to check that the correctness property is the right one, but if the claim is correct, coding agents should be best at writing code, accompanied by correctness proofs, in ATS or Idris. Are they?
Obviously, mileage may vary depending on the task and the domain, but if it's true that coding models will get significantly better, then the best course of action may well be, in many cases, to just wait until they do rather than spend a lot of effort working around their current limitations, effort that will be wasted if and when capabilities improve. And that's the big question: are we in for a long haul where agent capabilities remain roughly where they are today or not?
hobofan|2 months ago
I have the complete opposite experience: once some pattern already exists 2-3 times in the codebase, the LLMs start accurately replicating it instead of trying to solve everything as one-off solutions.
> You can’t be inconsistent if there are no existing patterns.
"Consistency" shouldn't be equated to "good". If that's your only metric for quality and you don't apply any taste you'll quickly end of with a unmaintainable hodgepodge of second-grade libraries if you let an LLM do its thing in a greenfield project.
andrewmutz|2 months ago
You may not like all the opinions of the framework, but the LLM knows them and you don’t need to write up any guidelines for it.
christophilus|2 months ago
I liked the Rust solution a lot, but it had 200+ dependencies vs Bun’s 5 and Rails’ 20ish (iirc). Rust feels like it inherited the NPM “pull in a thousand dependencies per problem” philosophy, which is a real shame.
some-guy|2 months ago
laser9|2 months ago
Good read, but I wouldn't fully extend the garbage-in, garbage-out principle to LLMs. These massive LLMs are trained on internet-scale data, which includes a significant amount of garbage, and still do pretty well. Hallucinations stem more from missing or misleading context than from noise alone. Tech-debt-heavy codebases, though unstructured, still provide information-rich context.
CuriouslyC|2 months ago
Decent article but it feels like a linkedin rehashing of stuff the people at the edge have already known for a while.
Aurornis|2 months ago
You're not wrong, but it bears repeating to newcomers.
The average LLM user I encounter is still just hammering questions into the prompt and getting frustrated when the LLM makes the same mistakes over and over again.
blauditore|2 months ago
hu3|2 months ago
victorbjorklund|2 months ago
mym1990|2 months ago
zeroonetwothree|2 months ago
Like if I go to a restaurant for the first time and the item I order is bad, could I go back and try something else? Perhaps, but I could also go somewhere else.
t_tsonev|2 months ago
tracker1|2 months ago
I've seen some impressive output so far, and have a couple friends that have been using AI generation a lot... I'm trying to create a couple legacy (BBS tech related, in Rust) applications to see how they land. So far it's mostly planning and structure, beyond the time I've spent in contemplation. I'm not sure I can justify the expense long term, but I want to experience the fuss a bit more to have at least a better awareness.
patcon|2 months ago
I find it to be a good thing that the code must be read in order to be production-grade, because that implies the coder must keep learning.
I worry about the collapse in knowledge pipeline when there is very little benefit to overseeing the process...
I say that as a bad coder who can and has done SO MUCH MORE with llm agents. So I'm not writing this as someone who has an ideal of coding that is being eroded. I'm just entering the realm of "what elite coding can do" with LLMs, but I worry for what the realm will lose, even as I'm just arriving
vivin|2 months ago
Knowing how you would implement the solution beforehand is a huge help, because then you can just tell the LLM to do the boring/tedious bits.
teaearlgraycold|2 months ago
ericmcer|2 months ago
It almost never fails and usually does it in a neat way, plus it's ~50 lines of code so I can copy and paste confidently. Letting the agent just go wild on my code has always been a PITA for me.
ColinEberhardt|2 months ago
This is a good article, but misses one of the most important advances this year - the agentic loop.
There are always going to be limits to how much code a model can one-shot. Give it the ability to verify its changes and iterate, and you massively increase its ability to write sizeable chunks of verified, working code.
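A minimal sketch of such a loop, with the model call injected as a plain function (`propose_patch`, `apply_patch`, and `run_checks` are hypothetical names, not any particular tool's API):

```python
def agentic_loop(task, propose_patch, apply_patch, run_checks, max_iters=5):
    """Generate -> apply -> verify, feeding verification output back into
    the next generation until checks pass or the budget runs out."""
    feedback = ""
    for attempt in range(1, max_iters + 1):
        patch = propose_patch(task, feedback)  # the LLM call
        apply_patch(patch)
        ok, output = run_checks()              # e.g. run tests / linters
        if ok:
            return attempt                     # iterations it took
        feedback = output  # the key step: errors re-enter the model's context
    return None  # gave up within the budget

# A real run_checks might shell out to the test suite, e.g.:
#   proc = subprocess.run(["pytest", "-x", "-q"], capture_output=True, text=True)
#   return proc.returncode == 0, proc.stdout + proc.stderr
```

The verification step is what turns a one-shot guess into a converging process: each failure narrows what the next generation has to fix.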
EastLondonCoder|2 months ago
I’m working on a fairly messy ingestion pipeline (Instagram exports → thumbnails → grouped “posts” → frontend rendering). The data is inconsistent, partially undocumented, and correctness is only visible once you actually look at the rendered output. That makes it a bad fit for naïve one-shotting.
What’s worked is splitting responsibility very explicitly:
• Human (me): judge correctness against reality. I look at the data, the UI, and say things like “these six media files must collapse into one post”, “stories should not appear in this mode”, “timestamps are wrong”. This part is non-negotiably human.
• LLM as planner/architect: translate those judgments into invariants and constraints (“group by export container, never flatten before grouping”, “IG mode must only consider media/posts/*”, “fallback must never yield empty output”). This model is reasoning about structure, not typing code.
• LLM as implementor (Codex-style): receives a very boring, very explicit prompt derived from the plan. Exact files, exact functions, no interpretation, no design freedom. Its job is mechanical execution.
Crucially, I don’t ask the same model to both decide what should change and how to change it. When I do, rework explodes, especially in pipelines where the ground truth lives outside the code (real data + rendered output).
This also mirrors something the article hints at but doesn’t fully spell out: the codebase isn’t just context, it’s a contract. Once the planner layer encodes the rules, the implementor can one-shot surprisingly large changes because it’s no longer guessing intent.
The challenges are mostly around discipline:
• You have to resist letting the implementor improvise.
• You have to keep plans small and concrete.
• You still need guardrails (build-time checks, sanity logs) because mistakes are silent otherwise.
But when it works, it scales much better than long conversational prompts. It feels less like “pair programming with an AI” and more like supervising a very fast, very literal junior engineer who never gets tired, which, in practice, is exactly what these tools are good at.
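The division of labor above might be sketched like this, with both model calls stubbed out as plain functions (`call_planner` and `call_implementor` are hypothetical, as is the prompt wording):

```python
def plan_change(call_planner, judgment):
    """Planner LLM: turn a human correctness judgment into explicit
    constraints and a file-by-file task list -- deliberately no code."""
    return call_planner(
        "Translate this judgment into invariants and an exact task list "
        "(files, functions). Do not write code.\n\nJudgment: " + judgment
    )

def implement_change(call_implementor, plan):
    """Implementor LLM: mechanical execution only, no design freedom."""
    return call_implementor(
        "Apply exactly this plan. Touch only the listed files and "
        "functions. Do not reinterpret.\n\nPlan: " + plan
    )

def one_shot(call_planner, call_implementor, judgment):
    # The discipline: one call decides *what*, a separate call decides *how*.
    return implement_change(call_implementor, plan_change(call_planner, judgment))
```

Keeping the two prompts separate is what prevents the implementor from improvising its way back into design decisions.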
eurekin|2 months ago
spullara|2 months ago
throw-12-16|2 months ago
Burn through your token limit in agent mode just to thrash around a few more times trying to identify where the agent "misunderstood" the prompt.
The only time LLM's work as coding agents for me is tightly scoped prompts with a small isolated context.
Just throwing an entire codebase into an LLM in an agentic loop seems like a fool's errand.
smallerize|2 months ago
skolos|2 months ago
tharkun__|2 months ago
Claude does have this specific interface for asking questions now. I've only had it choose to ask me questions on its own a very few times though. But I did have it ask clarifying questions before that interface was even a thing, when I specifically asked it to ask me clarifying questions.
Again, like a junior dev. And like a junior dev, it can also help to ask it to check in "mid-way", i.e. watch what it's doing and stop it when it's running down some rabbit hole you know is not gonna yield results.
CPLX|2 months ago
> Before you proceed, read the local and global Claude.md files and make sure you understand how we work together. Make sure you never proceed beyond your own understanding.
> Always consult the user anytime you reach a judgment call rather than just proceeding. Anytime you encounter unexpected behavior or errors, always pause and consider the situation. Rather than going in circles, ask the user for help; they are always there and available.
> And always work from understanding; never make assumptions or guess. Never come up with field names, method names, or framework ideas without just going and doing the research. Always look at the code first, search online for documentation, and find the answer to things. Never skip that step and guess when you do not know the answer for certain.
And then the Claude.md file has a much more clearly written out explanation of how we work together and how it's a consultative process where every major judgment call should be prompted to the user, and every single completed task should be tested and also asked for user confirmation that it's doing what it's supposed to do. It tends to work pretty well so far.
pteetor|2 months ago
"Before you start, please ask me any questions you have about this so I can give you more context. Be extremely comprehensive."
(I got the idea from a Medium article[1].) The LLM will, indeed, stop and ask good questions. It often notices what I've overlooked. Works very well for me!
[1] https://medium.com/@jordan_gibbs/the-most-important-chatgpt-...
zvorygin|2 months ago
Ayanonymous|2 months ago
tschellenbach|2 months ago
But the summary here is that with the right guidance, AI currently crushes it on large codebases.
unknown|2 months ago
[deleted]
avree|2 months ago
Simplita|2 months ago
jukkat|2 months ago
I’d like to see dynamic task-specific context building. Write a prompt and the model starts to collect relevant instructions.
Also a review loop to check that instructions were followed.
eddywebs|2 months ago
uoaei|2 months ago
rootnod3|2 months ago