
Context Rot: How increasing input tokens impacts LLM performance

260 points | kellyhongsn | 7 months ago | research.trychroma.com

I work on research at Chroma, and I just published our latest technical report on context rot.

TLDR: Model performance is non-uniform across context lengths; this holds even for state-of-the-art models, including GPT-4.1, Claude 4, Gemini 2.5, and Qwen3.

This highlights the need for context engineering. Whether relevant information is present in a model’s context is not all that matters; what matters more is how that information is presented.

Here is the complete open-source codebase to replicate our results: https://github.com/chroma-core/context-rot

59 comments

[+] posnet|7 months ago|reply
I've definitely noticed this anecdotally.

Especially with Gemini Pro when providing long-form textual references: providing many documents in a single context window gives worse answers than having it summarize the documents first, asking a question about the summaries only, and then providing the full text of the sub-documents on request (RAG-style, or just a simple agent loop; rough sketch below).

Similarly, I've personally noticed that Claude Code with Opus or Sonnet gets worse the more compactions happen. It's unclear to me whether it's just that the summary gets worse, or whether it's the context window having a higher percentage of less relevant data, but even clearing the context and asking it to re-read the relevant files (even if they were mentioned and summarized in the compaction) gives better results.
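
A rough sketch of the summarize-first pattern above (illustrative only; `llm()` stands in for whatever chat-completion call is actually used, and the prompts are made up):

    def llm(prompt: str) -> str:
        # Stand-in: wrap your actual chat API (OpenAI, Gemini, local model, ...) here.
        raise NotImplementedError

    def summarize_all(docs: dict[str, str]) -> dict[str, str]:
        # One short summary per document; full texts stay out of the main context.
        return {name: llm(f"Summarize in about 5 bullet points:\n\n{text}")
                for name, text in docs.items()}

    def answer(question: str, docs: dict[str, str]) -> str:
        summaries = summarize_all(docs)
        catalog = "\n\n".join(f"[{name}]\n{s}" for name, s in summaries.items())
        # First pass: answer from summaries only, or name the docs needed in full.
        first = llm(f"{catalog}\n\nQuestion: {question}\n"
                    "Answer if you can; otherwise list the document names you need in full.")
        requested = [name for name in docs if name in first]
        if not requested:
            return first
        # Second pass: only the requested documents enter the context.
        full = "\n\n".join(f"[{name}]\n{docs[name]}" for name in requested)
        return llm(f"{full}\n\nQuestion: {question}")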

[+] zwaps|7 months ago|reply
Gemini loses coherence and reasoning ability well before the chat hits the context limit, and according to this report it is the best model on several dimensions.

Long story short: Context engineering is still king, RAG is not dead

[+] irskep|7 months ago|reply
"Compactions" are just reducing the transcript to a summary of the transcript, right? So it makes sense that it would get worse because the agent is literally losing information, but it wouldn't be due to context rot.

The thing that would signal context rot is when you approach the auto-compact threshold. Am I thinking about this right?

[+] bayesianbot|7 months ago|reply
I feel like the optimal coding agent would do this automatically: collect and (sometimes) summarize the required parts of the code, MCP responses, repo maps, etc., then combine the results into a new message in a new 'chat' that contains all the required parts and nothing else. It's basically what I already do with aider, and I feel the performance (in situations with a lot of context) is way better than any agentic / more automated workflow I've tried so far, but it is a lot of work.
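
A sketch of that assembly step (illustrative only; the snippet labels and `assemble_fresh_context` are made-up names):

    def assemble_fresh_context(task: str, snippets: dict[str, str]) -> list[dict]:
        # Gather only the needed pieces (code excerpts, repo map, MCP/tool output)
        # into a single message for a brand-new chat; nothing from the old one.
        sections = "\n\n".join(f"### {label}\n{body}" for label, body in snippets.items())
        return [{"role": "user", "content": f"Relevant material only:\n\n{sections}\n\nTask: {task}"}]

    messages = assemble_fresh_context(
        "Make the submit button red.",
        {"Button.tsx (excerpt)": "...", "theme tokens": "..."},
    )
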
[+] tough|7 months ago|reply
Have you tried NotebookLM, which basically does this as an app in the background (chunking and summarising many docs)? You can "chat" with the full corpus using RAG.
[+] lukev|7 months ago|reply
This effect is well known but not well documented so far, so great job here.

It's actually even more significant than it's possible to benchmark easily (though I'm glad this paper has done so.)

Truly useful LLM applications live at the boundaries of what the model can do. That is, attending to some aspect of the context that might be several logical "hops" away from the actual question or task.

I suspect that the context rot problem gets much worse for these more complex tasks... in fact, exponentially so for each logical "hop" which is required to answer successfully. Each hop compounds the "attention difficulty" which is increased by long/distracting contexts.

[+] milchek|7 months ago|reply
Anecdotally, my experience has been that the longer a conversation goes on in Cursor about a new feature or code change, the worse the output gets.

The best results seem to be from clear, explicit instructions and plan up front for a discrete change or feature, with the relevant files to edit dragged into the context prompt.

[+] elmean|7 months ago|reply
Agreed. The flow of Explore -> plan -> code -> test -> commit has made things better, along with clearing the context between steps when it makes sense.
[+] 0x457|7 months ago|reply
Yeah, that's why I often save the context once there is enough information for the work to be done. Then, once I notice a regression in quality, I write a summary of the work done (which can still be low quality) and add it on top of the previous checkpoint.
[+] Workaccount2|7 months ago|reply
What's really needed is a way to easily prune context. If I could go and manually manage the entire chat with a model, I could squeeze way more juice out of a typical ~200k token coding session.

Instead, I have a good instance going, but then the model fumbles for 20k tokens and that session is heavily rotted. Let me cut it out!
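
Something like this is all that's being asked for (sketch only; which turns to drop is a manual, human decision here):

    def prune(messages: list[dict], drop: set[int]) -> list[dict]:
        # Keep every turn except the manually chosen "fumbled" ones.
        return [m for i, m in enumerate(messages) if i not in drop]

    # e.g. turns 14-31 were the model fumbling for ~20k tokens; cut them and keep going:
    # trimmed = prune(messages, drop=set(range(14, 32)))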

[+] aaronblohowiak|7 months ago|reply
Even just a rollback to a previous checkpoint would be a killer feature.
[+] snickerdoodle12|7 months ago|reply
Local LLMs let you edit the context however you want, including the responses generated by the LLM, so it will later think it said what you want it to say, which can help put it on the right track.

LLMs-as-a-service don't offer this because it makes it trivial to bypass their censoring.
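
As a sketch of what that looks like in practice (illustrative; `generate()` stands in for a local inference call such as llama.cpp or vLLM):

    history = [
        {"role": "user", "content": "Refactor the parser."},
        {"role": "assistant", "content": "I'll rewrite it in Perl."},  # not what we wanted
    ]
    # Rewrite the assistant turn so the model later believes it proposed the right plan.
    history[-1]["content"] = "I'll refactor the existing Python parser in place."
    # The next turn continues from the edited history:
    # reply = generate(history + [{"role": "user", "content": "Go ahead."}])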

[+] steveklabnik|7 months ago|reply
I have experimented with "hey claude i am about to reset your context, please give me a prompt that will allow you to continue your work" and then reviewing that and tweaking it before feeding it back in.
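
A minimal version of that hand-off (sketch only; `chat()` is a stand-in for the actual call, and the prompt wording is illustrative):

    HANDOFF = ("I am about to reset your context. Write a prompt that would let a "
               "fresh instance of you continue this work: goals, current state, "
               "open questions, and relevant file paths.")
    # handoff_prompt = chat(history + [{"role": "user", "content": HANDOFF}])
    # ...review and tweak handoff_prompt by hand...
    # new_history = [{"role": "user", "content": handoff_prompt}]
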
[+] lordswork|7 months ago|reply
/compress is the command to do this in most CLI agents.
[+] boesboes|7 months ago|reply
Claude Code loses the ability to distinguish between its own mistakes and my instructions. Once it gets confused, start over. The longer the session, the more it starts to go in loops, or just decides that the test was already broken (despite having broken it in this session) and that it will just ignore it.

I'm sure it's all my poor prompting and context, but it really seems like Claude has lost 30 IQ points in the last few weeks.

[+] vevoe|7 months ago|reply
No, I feel the same way too. I'm on the max plan and I swear it has good days and bad days.
[+] SketchySeaBeast|7 months ago|reply
> I'm sure it's all my poor prompting and context,

Does this not feel like gaslighting we've all now internalized?

[+] blixt|7 months ago|reply
This is one type of problem of information retrieval, but I think the change in performance with context length may be different for non-retrieval answers (such as “what is the edited code for making this button red?” or “which of the above categories does the sentence ‘…’ fall under?”).

One paper that stood out to me a while back was Many-Shot In-Context Learning[1] which showed large positive jumps in performance from filling the context with examples.

As always, it's important to test on one's own problem to see how the LLM's behavior changes with different context contents and lengths; I wouldn't assume a longer context is always worse.

[1] https://arxiv.org/pdf/2404.11018
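
A sketch of what a many-shot prompt in the spirit of [1] looks like (illustrative only; the example format and `build_many_shot_prompt` are not from the paper):

    def build_many_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
        # Hundreds of labelled examples packed into the context before the query.
        shots = "\n".join(f"Input: {x}\nLabel: {y}\n" for x, y in examples)
        return f"{shots}Input: {query}\nLabel:"

    # prompt = build_many_shot_prompt(labelled_examples[:500], "the sentence to classify")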

[+] orbital-decay|7 months ago|reply
My intuition is that questions that require reasoning always perform worse than direct retrieval questions, without exception. Especially when it's about negatives or when distractors are present. You're right though, intuition is not measurement; some relevant numbers would be nice to see.

ICL is a phenomenon separate from long-context performance degradation; the two can coexist, similar to how lost-in-the-middle affects the performance of in-context examples depending on their position.

[+] zwaps|7 months ago|reply
Very cool results, very comprehensive article, many insights!

Media literacy disclaimer: Chroma is a vectorDB company.

[+] philip1209|7 months ago|reply
Chroma does vector, full-text, and regex search. And, it's designed for multitenant workloads typical of AI applications. So, not just a "vectorDB company".
[+] tjkrusinski|7 months ago|reply
Interesting report. Are there recommended sizes for different models? How do I know what works or doesn't for my use case?
[+] lifthrasiir|7 months ago|reply
I recently wrote several novels using Gemini 2.5 Flash, and the context rot is noticeable but happens far later than what this report implies. In my experience, 50K to 100K tokens were required for it to start to disregard the initial context (e.g. the output language). Maybe a complex task like creative writing makes the impact harder to measure or observe; in any case it remained okay enough for me, as long as I supplied the missing context from time to time.
[+] elevaet|7 months ago|reply
Let's hear about these novels - are they good? Are you publishing them?
[+] magicalhippo|7 months ago|reply
Is this due to lack of specific long-context training, or is it more limitations of encoding or similar?

I've noticed this issue as well with smaller local models that have relatively long contexts, say an 8B model with 128k context.

I imagined they performed special recall training for these long context models, but the results seem... not so great.

[+] jpcompartir|7 months ago|reply
Good question, I was wondering the same.

My hunch would be that even if we had a lot more annotated examples of reasoning and retrieval over 10,000+ tokens, the architectures we have today would still be limited.

[+] namibj|7 months ago|reply
It's inherent, see https://arxiv.org/abs/2002.07028 (as I detailed in my sibling comment to yours just now, but before I saw yours). That said, there are architecture sizing choices that allow much better long-context performance at the cost of some short-context performance for a given parameter count and inference compute budget.
[+] mikeve|7 months ago|reply
I've experienced this as well. I'm working on a project for which I wanted to search through video transcripts, which are often very long texts. I figured that since models like the GPT-4.1 series have very large context windows, RAG was not needed, but I definitely notice some strange issues, especially with the smaller models: things like not answering the question that was asked, but returning a generic summary of the content instead.
[+] namibj|7 months ago|reply
On this note I want to point at "Low-Rank Bottleneck in Multi-head Attention Models" [0], which details how attention inherently needs the query dimension to match or exceed the sequence length to allow precise (and especially sharp) targeting.

It may be that dimension-starved pretrained transformer models rely heavily on context being correctly "tagged" in all relevant aspects the very instant it's inserted into the KV cache, e.g. necessitating negation to be prefixed to a fact instead of allowing postfix negation. The common LLM chat case is telling the model it just spewed hallucinated/wrong claims, and hoping this will help instead of hurt downstream performance as the chat continues. There, specifically, the negation is very delayed and thus not present in most of the tokens that encode the hallucinated claims in the KV cache; for lack of sufficient positional precision due to insufficient dimensionality, the transformer can't retroactively attribute the "that was wrong" claim to the hallucination tokens in a retrievable manner.

The result, of course, is the behavior we experience: hallucinations are corrected by editing the message that triggered them to include discouraging words, as otherwise the thread becomes near-useless from the hallucination polluting the context.

I do wonder if we have maybe figured out how to do this more scalably than just naively raising the query dimension to get (back?) closer to the sequence length.

[0]: https://arxiv.org/abs/2002.07028
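
A compressed version of the rank argument in [0], for a single attention head (the notation here is assumed, not taken from the comment):

    % single head, sequence length n, head dimension d_h
    A = \operatorname{softmax}\!\Bigl(\tfrac{QK^{\top}}{\sqrt{d_h}}\Bigr),
    \qquad Q, K \in \mathbb{R}^{n \times d_h},
    \qquad \operatorname{rank}(QK^{\top}) \le d_h .
    % When d_h < n, the n-by-n logit matrix is rank-deficient, so sharply peaked
    % attention onto arbitrary positions cannot be realized for all queries at once.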

[+] tough|7 months ago|reply
This felt intuitively true; great to see some research putting hard numbers on it.
[+] kelsey98765431|7 months ago|reply
Free hint: a model can be trained to prune or clean up context in a multi-shot conversation. The final number of removed tokens plus the final verifiable reward is itself a verifiable signal. Cheers.
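
One way to read that hint as a reward function (a guess at the shape, not the poster's actual setup):

    def pruning_reward(task_passed: bool, tokens_before: int, tokens_after: int,
                       alpha: float = 1e-3) -> float:
        # The verifiable task outcome gates everything; pruning earns a bonus only
        # if the task still succeeds after the context was cleaned up.
        return float(task_passed) * (1.0 + alpha * (tokens_before - tokens_after))
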
[+] jgalt212|7 months ago|reply
The industry will fight context rot mitigation efforts. Smaller context windows mean less need for thousands of GPUs, and less need for hyperscalers. The up-and-to-the-right narrative falls apart.
[+] jsemrau|7 months ago|reply
Once you are working with local LLMs, you quickly run into CUDA out-of-memory errors. Managing input context sizes in prompts is really critical. It also keeps costs down.
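
A minimal way to enforce such a budget (sketch only; assumes a Hugging Face tokenizer for whatever local model is loaded, and the model name is just an example):

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")  # example model

    def trim_to_budget(context: str, max_tokens: int = 8192) -> str:
        # Keep the most recent tokens so the prompt fits in VRAM (and stays cheap).
        ids = tok.encode(context)
        return tok.decode(ids[-max_tokens:]) if len(ids) > max_tokens else context
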
[+] kbelder|7 months ago|reply
If you're working with local LLMs, why do you care about cost?