The FTS5 index approach here is right, but I'd push further: pure BM25 underperforms on tool outputs because they're a mix of structured data (JSON, tables, config) and natural language (comments, error messages, docstrings). Keyword matching falls apart on the structured half.
I built a hybrid retriever for a similar problem, compressing a 15,800-file Obsidian vault into a searchable index for Claude Code. Stack is Model2Vec (potion-base-8M, 256-dimensional embeddings) + sqlite-vec for vector search + FTS5 for BM25, combined via Reciprocal Rank Fusion. The database is 49,746 chunks in 83MB. RRF is the important piece: it merges ranked lists from both retrieval methods without needing score calibration, so you get BM25's exact-match precision on identifiers and function names plus vector search's semantic matching on descriptions and error context.
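The RRF merge itself is tiny. A sketch of the idea (k=60 is the conventional constant; not my exact code):

```typescript
// Reciprocal Rank Fusion: merge two ranked lists of chunk IDs without
// calibrating BM25 scores against cosine similarities. rank is 0-based
// here, so the standard 1/(k + rank) becomes 1/(k + rank + 1).
function rrfMerge(bm25: string[], vector: string[], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of [bm25, vector]) {
    list.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

A document that ranks moderately well in both lists beats one that ranks first in only one, which is exactly the behavior you want when neither retriever is trustworthy alone.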
The incremental indexing matters too. If you're indexing tool outputs per-session, the corpus grows fast. My indexer has a --incremental flag that hashes content and only re-embeds changed chunks. Full reindex of 15,800 files takes ~4 minutes; incremental on a typical day's changes is under 10 seconds.
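The change detection is just content hashing, roughly like this (sketch, not the exact indexer code):

```typescript
import { createHash } from "node:crypto";

// Re-embed only chunks whose content hash changed since the last run.
// `seen` would be loaded from (and persisted back to) a hashes table in
// the same SQLite file.
function chunksToReembed(
  chunks: { id: string; text: string }[],
  seen: Map<string, string>,
): { id: string; text: string }[] {
  return chunks.filter(({ id, text }) => {
    const digest = createHash("sha256").update(text).digest("hex");
    if (seen.get(id) === digest) return false; // unchanged, skip re-embedding
    seen.set(id, digest);
    return true;
  });
}
```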
On the caching question raised upthread: this approach actually helps prompt caching because the compressed output is deterministic for the same query. The raw tool output would be different every time (timestamps, ordering), but the retrieved summary is stable if the underlying data hasn't changed.
One thing I'd add to Context Mode's architecture: the same retriever could run as a PostToolUse hook, compressing outputs before they enter the conversation. That way it's transparent to the agent, it never sees the raw dump, just the relevant subset.
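A hook script would just read the hook event from stdin. An untested sketch (the tool_name/tool_response field names come from the hooks docs and are worth verifying; whether the printed summary can substitute for the raw result depends on the harness's hook semantics):

```typescript
#!/usr/bin/env node
// Sketch of a PostToolUse hook body: read the event JSON from stdin,
// index anything large, and emit a short summary instead.
process.stdin.setEncoding("utf8");
let raw = "";
for await (const chunk of process.stdin) raw += chunk;
const event = JSON.parse(raw);

const output = JSON.stringify(event.tool_response ?? "");
if (output.length > 4_000) {
  // indexChunks() is a stand-in for the hybrid retriever described above.
  // indexChunks(event.tool_name, output);
  console.log(`[${event.tool_name}: indexed ${output.length} chars; head follows]`);
  console.log(output.slice(0, 500));
} else {
  console.log(output); // small outputs pass through untouched
}
```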
Very interesting. One big wrinkle with OP's approach is exactly that: structured responses, which many tools return, are left untouched. The solution in OP, as I understand it, is the "execute" method. However, I'm building an MCP gateway, and such sandboxed execution isn't available (...yet), so your approach to this sounds very clever. I'll spend today trying it out.
The core idea: every MCP tool call dumps raw data into your 200K context window. Context Mode spawns isolated subprocesses — only stdout enters context. No LLM calls, purely algorithmic: SQLite FTS5 with BM25 ranking and Porter stemming.
Since the last post we've seen 228 stars and some real-world usage data. The biggest surprise was how much subagent routing matters — auto-upgrading Bash subagents to general-purpose so they can use batch_execute instead of flooding context with raw output.
Source: https://github.com/mksglu/claude-context-mode. Happy to answer any architecture questions.
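For the curious, the index core is plain SQLite. A rough equivalent of what runs under the hood (better-sqlite3 here is illustrative, not necessarily the project's actual setup):

```typescript
import Database from "better-sqlite3";

const db = new Database("context-mode.db");
// FTS5 virtual table with Porter stemming; bm25() ranking is built in.
db.exec(`CREATE VIRTUAL TABLE IF NOT EXISTS outputs
         USING fts5(tool, content, tokenize='porter unicode61')`);

db.prepare("INSERT INTO outputs VALUES (?, ?)").run(
  "git", "a1b2c3 fix: resolve race in cache eviction",
);

// Lower bm25() means a better match, so order ascending.
const hits = db.prepare(
  `SELECT tool, snippet(outputs, 1, '[', ']', '...', 8) AS extract
   FROM outputs WHERE outputs MATCH ? ORDER BY bm25(outputs) LIMIT 5`,
).all("cache eviction");
console.log(hits);
```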
Small suggestion: link to the Cloudflare Code Mode post [0] in the blog post where you mention it. It's linked in the README, but when I saw it in the blog post, I had to Google it.

[0] https://blog.cloudflare.com/code-mode-mcp/
Really intrigued and def will try, thanks for this.
In connecting the dots (and help me make sure I'm connecting them correctly), context-mode _does not address MCP context usage at all_, correct? You are instead suggesting we refactor or eliminate MCP tools, or apply concepts similar to context_mode in our MCPs where possible?
Context-mode is still very high value, even if the answer is "no," just want to make sure I understand. Also interested in your thoughts about the above.
I write a number of MCPs that work across all Claude surfaces, so the usual "CLI!" answer isn't as viable (though with code execution it sometimes can be) ...
It strikes me there's more low-hanging fruit to pluck re: context window management. Backtracking seems like another promising direction to avoid context bloat and compaction (i.e. when a model takes a few attempts to do the right thing, prune the failed attempts out of the context once it's done the right thing).
Agree. I’d like more fine-grained control of context and compaction. If you spend time debugging in the middle of a session, then once you’ve fixed the bugs you ought to be able to remove everything related to fixing them from context and continue as you had before you encountered them. (Right now, depending on your IDE, this can be quite annoying to do manually. And I’m not aware of any that let you snip it out if you’ve worked with the agent on other tasks afterwards.)
I think agents should manage their own context too. For example, if you’re working with a tool that dumps a lot of logged information into context, those logs should get pruned out after one or two more prompts.
Context should be thought of as something that can be freely manipulated, rather than a stack where things can only be appended or removed from the end.
Totally agree. Failed attempts are just noise once the right path is found. Auto-detecting retry patterns and pruning them down to the final working version feels very doable, especially for clear cases like lint or compilation fixes.
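A naive version of that detection: group attempts by command, and once one succeeds, drop its earlier failures. Sketch below; the Turn shape is invented for illustration.

```typescript
// Hypothetical transcript shape for illustration.
type Turn = { cmd?: string; exitCode?: number; text: string };

// Once a command finally succeeds, drop the earlier failed attempts at the
// same command. Deliberately naive: real retries also vary flags and paths.
function pruneFailedRetries(turns: Turn[]): Turn[] {
  const succeeded = new Set(
    turns.filter((t) => t.cmd !== undefined && t.exitCode === 0).map((t) => t.cmd),
  );
  return turns.filter(
    (t) =>
      !(t.cmd !== undefined && t.exitCode != null && t.exitCode !== 0 &&
        succeeded.has(t.cmd)),
  );
}
```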
It feels like the late 1990s all over again, but instead of HTML and SQL, it’s coding agents. This time around, a lot of us are well experienced at software engineering and so we can find optimizations simply by using claude code all day long. We get an idea, we work with ai to help create a detailed design and then let it develop it for us.
Maybe the right answer is “why not both”, but subagents can also be used for that problem. That is, when something isn’t going as expected, fork a subagent to solve the problem and return with the answer.
It’s interesting to imagine a single model deciding to wipe its own memory though, and roll back in time to a past version of itself (only, with the answer to a vexing problem)
I do this with my agents. Basically, every "work" oriented call spawns a subprocess which does not add anything to the parent context window. When the subprocess completes the task, I ask it to 1) provide a complete answer, 2) provide a succinct explanation of how the answer was arrived at, 3) provide a succinct explanation of any attempts which did not work, and 4) note anything learned during the process which may be useful in the future. Then I feed those four answers back to the parent as if they were magically arrived at.

Another thing I do for managing the context window: any tool/MCP call has its output piped into a file. The LLM can then read only parts of the file, and only add those to its context if they are sufficient. For example, if some command produces a lot of output and ultimately ends in "Success!", the LLM can just tail the last line to see if it succeeded. If it did, the rest of the output doesn't need to be read; if it failed, the failure message is usually at the end of the log.

Something I'm working on now is having a smaller local model summarize the log output and feed that summarization to the more powerful LLM (because I can run my local model for ~free, but it is nowhere near as capable as the cloud models). I don't keep up with SOTA so I have no idea if what I'm doing is well known or not, but it works for me and my setup.
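The file-pipe wrapper amounts to something like this (a rough sketch of the pattern, not my actual code):

```typescript
import { spawnSync } from "node:child_process";
import { closeSync, openSync, readFileSync } from "node:fs";

// Run a command with stdout/stderr redirected to a log file, then hand the
// agent only the exit status, the path, and the last few lines. It can read
// more of the file later if the tail isn't enough.
function runToFile(cmd: string, args: string[], logPath: string) {
  const fd = openSync(logPath, "w");
  const { status } = spawnSync(cmd, args, { stdio: ["ignore", fd, fd] });
  closeSync(fd);
  const tail = readFileSync(logPath, "utf8").trimEnd().split("\n").slice(-5);
  return { status, logPath, tail };
}

// e.g. runToFile("npm", ["test"], "/tmp/npm-test.log")
```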
This post made me realize I had zero visibility into where my Claude Code tokens were actually going, so I built a small companion CLI this morning: https://github.com/vexorkai/claude-trace
It parses ~/.claude/projects/*/*.jsonl and breaks usage down by session, tool, project, and timeline with cost estimates (including cache read/create split).
Context Mode solves output compression really well; this is more of a measurement layer so you can see where the burn is before/after changes.
Disclosure: I built it.
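The parsing core is small. Roughly (the usage field names match what current transcripts emit, but the format is undocumented, so treat them as assumptions):

```typescript
import { readFileSync } from "node:fs";

// Sum token usage for one session transcript. Each line is a JSON object;
// assistant entries carry a message.usage block (field names assumed).
function sessionUsage(path: string) {
  const totals = { input: 0, output: 0, cacheRead: 0, cacheCreate: 0 };
  for (const line of readFileSync(path, "utf8").split("\n")) {
    if (!line.trim()) continue;
    let usage;
    try {
      usage = JSON.parse(line)?.message?.usage;
    } catch {
      continue; // skip malformed lines
    }
    if (!usage) continue;
    totals.input += usage.input_tokens ?? 0;
    totals.output += usage.output_tokens ?? 0;
    totals.cacheRead += usage.cache_read_input_tokens ?? 0;
    totals.cacheCreate += usage.cache_creation_input_tokens ?? 0;
  }
  return totals;
}
```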
The hooks seem too aggressive. Blocking all curl/wget/WebFetch and funneling everything through the sandbox sounds great for 56 KB snapshots, but not for a `curl api.example.com/health` that returns 200 bytes.
Compressing 153 git commits to 107 bytes means the LLM has to write the perfect extraction script before it can see the data. So if it writes a `git log --oneline | wc -l` when you needed specific commit messages, that information is gone.
The benchmarks assume the model always writes the right summarization code, which in practice it doesn't.
Do you need 80+ tools in context? Even if the set is reduced, why not use subagents for areas of focus? Context is gold, and the more you put into it that's unrelated to the problem at hand, the worse your outcome, even if you don't hit the limit of the window. It would be like compressing data to fit within a string limit rather than just chunking the data.
That's a fair point and honestly the ideal approach. But in practice most people don't hand-curate their MCP server list per task. They install 5-6 servers and suddenly have 80 tools loaded by default. Context-mode doesn't solve the tool definition bloat, that's the input side problem. It handles the output side, when those tools actually run and dump data back. Even with a focused set of tools, a single Playwright snapshot or git log can burn 50k tokens. That's what gets sandboxed.
A lot of this token usage can be avoided by using CLI apps instead of MCPs. For example, the GitHub CLI is very robust and does the job of the MCP at a fraction of the token cost.
yeah gh cli in particular is lean. though `gh pr view --json body,comments` can still flood context fast. the real win here is gatekeeping what hits context at all, regardless of source.
Not bad, but it sacrifices accuracy, and there's a risk of causing more hallucinations from incomplete data or the agent writing bad extraction logic. So the whole MCP assumes Claude is smart enough to write good extraction scripts AND formulate good search queries. I'm sure this could expand into something better in the future, but information preservation is a real issue in my experience.
Nice, I like the idea. It sounds like qualitatively you haven't had any performance regressions while doing this, but have you tested it at all on any sort of benchmark or similar eval? I'm curious how well the actual system performs with less context like this. I mean it's possible it actually improves...
AFAIK Claude Code doesn't inject all the MCP output into the context. It truncates at 25k tokens and uses bash pipe operators to read the full output. That's at least what I see in the latest version.
That's true, Claude Code does truncate large outputs now. But 25k tokens is still a lot, especially when you're running multiple tools back to back. Three or four Playwright snapshots or a batch of GitHub issues and you've burned 100k tokens on raw data you only needed a few lines from. Context-mode typically brings that down to 1-2k per call while keeping the full output searchable if you need it later.
This sounds a little bit like rtk (https://github.com/rtk-ai/rtk)? It trims output from CLI applications like git, find, and the most common tools used by Claude. This looks like it goes a little further, which is interesting.
I see some of these AI companies adopting some of these ideas sooner or later. Trim the tokens locally to save on token usage.
Haven't looked at rtk closely but from the description it sounds like it works at the CLI output level, trimming stdout before it reaches the model. Context-mode goes a bit further since it also indexes the full output into a searchable FTS5 database, so the model can query specific parts later instead of just losing them. It's less about trimming and more about replacing a raw dump with a summary plus on-demand retrieval.
Pretty sure you’re losing vital information from cross-session context. You may be able to work longer in a single session, but Claude will degrade without even the mundane details. It’s like doing math without showing your work.
The compression numbers look great but I keep wondering: does the model actually produce equivalent output with compressed context vs full context? Extending sessions from 30min to 3hrs only matters if reasoning quality holds up in hour 2.
esafak's cache economics point is underrated. With prompt caching, verbose context that gets reused is basically free. If compression breaks cache continuity you might save tokens while spending more money.
The deeper issue is that most MCP tools do SELECT * when they should return summaries with drill-down. That's a protocol design problem, not a compression problem.
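Concretely, a context-friendly result shape might look like this (illustrative only, not any real server's schema):

```typescript
// What a context-friendly MCP result could look like: a small summary up
// front plus a handle the model can use to page into the full data, instead
// of the whole result set inline.
interface ToolResult {
  summary: string;            // e.g. "312 open issues; top labels: bug (94), perf (31)"
  totalRows: number;          // so the model knows what was elided
  sample: Record<string, unknown>[]; // first N rows only
  cursor?: string;            // opaque token for a follow-up drill-down call
}
```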
> With prompt caching, verbose context that gets reused is basically free.
But it's not. It might be discounted cost-wise, but it will still degrade attention and make generation slower and more computationally expensive, even if you have a long prefix you can reuse during prefill.
We do a fun variant of this for louie.ai when working with database and especially log systems -- think incident response, SRE, devops, outage investigations: instead of returning DB query results to the LLM, we create dataframes (think in-memory parquet). These directly go into responses with token-optimized summary views, including hints like "... + 1M rows", so the LLM doesn't have to drown in logs and can instead decide to drill back into the dataframe more intelligently. Less iterative query pressure on operational systems, faster & cheaper agentic reasoning iterations, and you get a nice notebook back with the interactive data views.
A curious thing about the MCP protocol is it in theory supports alternative content types like binary ones. That has made me curious about shifting much of the data side of the MCP universe from text/json to Apache Arrow, and making agentic harnesses smarter about these just as we're doing in louie.
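The summary-view idea, sketched over plain record arrays rather than our actual dataframe code:

```typescript
// Token-optimized view of a large result set: schema, a few rows, and an
// explicit hint about what was elided so the model knows it can drill in.
function summaryView(rows: Record<string, unknown>[], maxRows = 10): string {
  if (rows.length === 0) return "(empty result)";
  const cols = Object.keys(rows[0]);
  const head = rows
    .slice(0, maxRows)
    .map((r) => cols.map((c) => String(r[c])).join(" | "));
  const hint =
    rows.length > maxRows
      ? [`... + ${(rows.length - maxRows).toLocaleString()} more rows`]
      : [];
  return [cols.join(" | "), ...head, ...hint].join("\n");
}
```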
Excited to try this. Is this not in effect a kind of "pre-compaction," deciding ahead of time what's relevant? Are there edge cases where it is unaware of, say, a utility function that it coincidentally picks up when it just dumps everything?
Yeah it's basically pre-compaction, you're right. The key difference is nothing gets thrown away. The full output sits in a searchable FTS5 index, so if the model realizes it needs some detail it missed in the summary, it can search for it. It's less "decide what's relevant upfront" and more "give me the summary now, let me come back for specifics later."
Talk about timely. I was just experimenting with a data provider's new MCP server, and I was able to use up my entire Claude Max token limit in under a minute.
Nice approach. Same core idea as context-mode but specialized for your build domain. You're using SQLite as a structured knowledge cache over YAML rule files with keyword lookup. Context-mode does something similar but domain-agnostic, using FTS5 with BM25 ranking so any tool output becomes searchable without needing predefined schemas. Cool to see the pattern emerge independently from a completely different use case.
I've seen a few projects like this. Shouldn't they in theory make the llms "smarter" by not polluting the context? Have any benchmarks shown this effect?
That's the theory and it does hold up in practice. When context is 70% raw logs and snapshots, the model starts losing track of the actual task. We haven't run formal benchmarks on answer quality yet, mostly focused on measuring token savings. But anecdotally the biggest win is sessions lasting longer before compaction kicks in, which means the model keeps its full conversation history and makes fewer mistakes from lost context.
Thanks for this. I do most of my work in subagents for better parallelization. Is it possible to have it work there? Currently the stats say subagents didn't benefit from it.
Would be interested to know if this architecture facilitates dynamic context injection from external knowledge sources without inflating the payload again.
If this breaks the cache it is penny wise, pound foolish; cached full queries have more information and are cheap. The article does not mention caching; does anyone know?
I just enable fat MCP servers as needed, and try to use skills instead.
It doesn't break the cache. The raw data never enters the conversation history, so there's nothing to invalidate. A short summary goes into context instead of the full payload, and the model can search the full data from a local FTS5 index if it needs specifics later. Cache stays intact because you're just appending smaller messages to the conversation.
> Bun auto-detected for 3–5x faster JS/TS execution
This is quite a claim, and even so, doesn't matter since the bottleneck is the LLM and not the JS interpreter. It's a nit, but little things like this just make the project look bad overall. It feels like nobody took the time to read the copy before publishing it.
More importantly, the claimed 98% context savings are noise without benchmarks of harness performance with and without "context mode".
I'm glad someone is working on this, but I just feel like this is not a serious solution to the problem.
No magic — standard Unix process inheritance. Each execute() spawns a child process via Node's child_process.spawn() with a curated env built by #buildSafeEnv (https://github.com/mksglu/claude-context-mode/blob/main/cont...). It passes through an explicit allowlist of auth vars (GH_TOKEN, AWS_ACCESS_KEY_ID, GOOGLE_APPLICATION_CREDENTIALS, KUBECONFIG, etc.) plus HOME and XDG paths so CLI tools find their config files on disk. No state persists between calls — each subprocess inherits credentials from the MCP server's environment, runs, and exits. This works because tools like gh and aws resolve auth on every invocation anyway (env vars or ~/.config files). The tradeoff is intentional: allowlist over full process.env so the sandbox doesn't leak unrelated vars.
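Simplified to the pattern (a sketch, not the exact repo code):

```typescript
import { spawn } from "node:child_process";

// The allowlist pattern described above, reduced to its core: build the
// child env from named vars only, so nothing unrelated leaks into the
// sandboxed subprocess. The real allowlist covers more vars than shown.
const ALLOWED = [
  "HOME", "PATH", "XDG_CONFIG_HOME", "XDG_DATA_HOME",
  "GH_TOKEN", "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY",
  "GOOGLE_APPLICATION_CREDENTIALS", "KUBECONFIG",
];

function buildSafeEnv(): Record<string, string> {
  const env: Record<string, string> = {};
  for (const key of ALLOWED) {
    const val = process.env[key];
    if (val !== undefined) env[key] = val;
  }
  return env;
}

const child = spawn("gh", ["pr", "list"], { env: buildSafeEnv(), stdio: "pipe" });
child.stdout.on("data", (d) => process.stdout.write(d));
```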
I suspect the obsessive note-taker crowd on HN would appreciate it too.
/context?
https://www.youtube.com/watch?v=bctjSvn-OC8
1. Can this help me? 2. How?
Thanks for sharing and building this.
I see your problem.
Use skills and cli instead.