Show HN: Badge that shows how well your codebase fits in an LLM's context window
85 points | jimminyx | 2 days ago | github.com
Repo Tokens is a GitHub Action that counts your codebase's size in tokens (using tiktoken) and updates a badge in your README. The badge color reflects what percentage of an LLM's context window the codebase fills: green for under 30%, yellow for 30-70%, red for 70%+. The context window size is configurable and defaults to 200k (the context window of Claude models).
It's a composite action: it installs tiktoken, runs ~60 lines of inline Python, and takes about 10 seconds. The action updates the README but doesn't commit, so your workflow controls the git strategy.
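For reference, wiring a composite action like this into a workflow might look roughly like the sketch below (the action path, input name, and commit step are illustrative guesses, not the documented usage; check the repo for the real inputs):

```yaml
name: repo-tokens
on:
  push:
    branches: [main]
jobs:
  badge:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: qwibitai/nanoclaw/repo-tokens@main   # hypothetical path, based on the repo URL
        with:
          context-window: 200000                   # hypothetical input name
      # The action edits the README but doesn't commit, so the workflow picks the git strategy:
      - run: |
          git config user.name github-actions
          git config user.email github-actions@github.com
          git commit -am "chore: update token badge" || echo "no badge changes"
          git push
```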
The idea is to make token size a visible metric, like bundle size badges for JS libraries. Hopefully a small nudge to keep codebases lean and agent-friendly.
GitHub: https://github.com/qwibitai/nanoclaw/tree/main/repo-tokens
layer8|2 days ago
In the case that interfaces remain unchanged, agents only need to look at the implementation of a single module at a time plus the interfaces it consumes and implements. And when changing interfaces, agents only need to look at the interfaces of the modules concerned, and at most a limited number of implementation considerations.
It’s the very reason why we humans invented modularization: so that we don’t have to hold the complete codebase in our heads (“context windows”) in order to reason about it and make changes to it in a robust and well-grounded way.
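A toy sketch of that point in code: an agent editing `Cache` below only needs `Cache` plus the `Storage` interface in context, never the backend's internals (all names are made up for illustration):

```python
from __future__ import annotations
from typing import Protocol

class Storage(Protocol):
    """Interface: this is all a consumer (or an agent editing one) needs in context."""
    def get(self, key: str) -> str | None: ...
    def put(self, key: str, value: str) -> None: ...

class Cache:
    """Consumes only the Storage interface; it can be reasoned about
    and modified without reading any backend's implementation."""
    def __init__(self, backend: Storage):
        self.backend = backend
        self._memo: dict[str, str] = {}

    def get(self, key: str) -> str | None:
        if key not in self._memo:
            val = self.backend.get(key)
            if val is not None:
                self._memo[key] = val
        return self._memo.get(key)

class DictStorage:
    """One concrete implementation; swappable without touching Cache."""
    def __init__(self):
        self._d: dict[str, str] = {}
    def get(self, key: str) -> str | None:
        return self._d.get(key)
    def put(self, key: str, value: str) -> None:
        self._d[key] = value
```

Changing `DictStorage`'s internals never requires `Cache` in context, and vice versa, which is exactly the property the parent describes.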
sltr|2 days ago
https://www.slater.dev/2026/02/relieve-your-context-anxiety-...
nebezb|2 days ago
From a purely UX perspective, showing a red badge seems like you’re conflating “less good” with size. Who is the target for this? Lots of useful codebases are large.
I do agree, however, that there’s value in splitting up domains into something a human can easily learn and keep in their head after, say, a few days of being deeply entrenched. Tokens could actually be a good proxy for this.
iterateoften|2 days ago
Agents. There are going to be more tools and software targeted for consumption by agents.
Doohickey-d|2 days ago
For example in my current case, there are lots of files with CSS, SVG icons in separate files, old database migration scripts, etc. Those don't go in the LLM context 99% of the time.
Maybe a more useful metric would be "what percentage of files that have been edited in the last {n} days fit in the context"?
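That metric is cheap to compute. A rough sketch, assuming a ~4 chars/token heuristic and a 200k window (both stand-ins; tiktoken would be more accurate):

```python
import subprocess

CONTEXT_TOKENS = 200_000   # assumed window size, matching the badge's default
CHARS_PER_TOKEN = 4        # rough heuristic; tiktoken would be more accurate

def recently_edited_files(days=30):
    """Paths touched in the last `days` days, according to git history."""
    out = subprocess.run(
        ["git", "log", f"--since={days} days ago", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted({line for line in out.splitlines() if line.strip()})

def context_fill(char_counts):
    """Estimated fraction of the context window a set of files would fill,
    given each file's character count."""
    estimated_tokens = sum(char_counts) / CHARS_PER_TOKEN
    return estimated_tokens / CONTEXT_TOKENS
```

Under this estimate, two 400k-character files already fill the whole window, regardless of how big the rest of the repo is.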
hennell|2 days ago
Scoping the AI to only use the things you'd use seems far wiser than trying to reduce your codebase so it can look at the whole thing when 90% of it is irrelevant.
ramoz|2 days ago
But my coolest app was a better context creator. I found it hard to extend to actual agentic coding use. Agentic discovery is generally useful and reliable - the overhead of tokens can be managed by the harness (i.e. Claude Code).
https://prompttower.com/
nicoburns|2 days ago
It is somewhat ironic that coding agents are notorious for generating much more code than necessary!
bilekas|2 days ago
It would be better to have the architecture support a more decoupled/modular design if you're going to rely heavily on LLMs.
That or let it consume high quality maintained documentation?
joshmarlow|2 days ago
I think this gestures at a more general point - we're still focusing on how to integrate LLMs into existing dev tooling paradigms. We squeeze LLMs into IDEs for human dev ergonomics but we should start thinking about LLM dev ergonomics - what idioms and design patterns make software development easiest for AIs?
f33d5173|2 days ago
> I think this gestures at a more general point - we're still focusing on how to integrate LLMs into existing dev tooling paradigms.
This is what we should be doing, for a couple of reasons. For one thing, humans don't have an entire codebase "in context" at a time; we should recognize that the limitations of an AI mirror the limitations of a person, and hence can have similar solutions. For another, the limitations of today's LLMs will not be the limitations of tomorrow's LLMs. Redesigning our code to suit today's limitations will only cause us trouble down the road.
SignalStackDev|2 days ago
[deleted]
collabs|2 days ago
I am not very good with AI though. Is there a quick and easy way to calculate token count and add this to my dump.txt file, ideally using just simple, included by default Linux tools in bash or simple, included by default Windows tools in powershell?
Thank you in advance.
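One rough approach with only default tools, assuming the common ~4 characters per token heuristic (real tokenizers like tiktoken will differ somewhat):

```shell
# Rough token estimate: bytes / 4 (~4 chars per token is a common
# heuristic for English text and code; actual tokenizers vary).
printf 'hello world, this is a test dump' > dump.txt   # stand-in for your real dump.txt
tokens=$(( $(wc -c < dump.txt) / 4 ))
echo "~${tokens} tokens"
# PowerShell equivalent:
#   [math]::Floor((Get-Item dump.txt).Length / 4)
```

To write the estimate into the file itself, append it with `echo "~${tokens} tokens" >> dump.txt`.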
b112|2 days ago
Doubt me?
Think back 2 years, then compare today. Change is happening at massive speed, and this issue is at the top of the list to be resolved in some fashion.
written-beyond|2 days ago
If we look back 2 years, companies weren't investing so heavily in training their LLMs on code. Any code they got their hands on was whatever was in the LLM's training corpus; it's well known that the most recent improvements in LLM productivity came after they spent millions paying different labs to produce more coding datasets for them.
So LLMs have gotten a lot better at not needing the entire codebase in context at once, because their weights are already so well tuned to development environments that they can infer and index things as needed. However, I fail to see how the context window limitation would no longer be an issue, since it's a fundamental part of the real world. Will we get better and more efficient ways of splitting and indexing context windows? Surely. Will that reduce our fear of soiling our contexts with bad prompt-response cycles? Probably not...
Towaway69|2 days ago
Also kind of ironic that small codebases are now in vogue, just when Google-style monolithic repos were so popular.
c0balt|2 days ago
It depends on the provider/model. Pricing is usually calculated in $/million tokens, with input and output tokens priced differently (output tends to be more expensive than input). Some models also charge more per token once the context size is above a threshold, and cached tokens may be billed at a reduced per-token rate.
OpenRouter has a good overview over provider and models, https://openrouter.ai/models
The math on what people are actually paying is hard to evaluate. IME, most companies would rather buy a subscription than give their developers API keys, as it makes spending predictable.
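As a sketch of how that per-token pricing composes (the rates below are made up for illustration, not any real provider's pricing):

```python
def request_cost(input_tokens, output_tokens, cached_tokens=0,
                 in_rate=3.00, out_rate=15.00, cached_rate=0.30):
    """Cost in USD. Rates are $ per million tokens -- purely illustrative
    numbers, not a specific provider's pricing. Cached input tokens are
    billed at the (cheaper) cached rate instead of the full input rate."""
    uncached = input_tokens - cached_tokens
    return (uncached * in_rate
            + cached_tokens * cached_rate
            + output_tokens * out_rate) / 1_000_000
```

For example, a call with 50k input tokens (40k of them cached) and 2k output tokens comes to $0.072 at these made-up rates; without caching the same call would be $0.18, which is why harnesses lean so hard on prompt caching.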
jamiecode|12 hours ago
[deleted]
marsven_422|2 days ago
[deleted]
ai-christianson|2 days ago
[deleted]
hal9000xbot|2 days ago
[deleted]