Show HN: 20+ Claude Code agents coordinating on real work (open source)
53 points | austinbaggio | 17 days ago | github.com
We’ve open-sourced a multi-agent orchestrator that we’ve been using to handle long-running LLM tasks. We found that single LLM agents tend to stall, loop, or generate non-compiling code, so we built a harness for agents to coordinate over shared context while work is in progress.
How it works:
1. An orchestrator agent that manages task decomposition
2. Sub-agents for parallel work
3. Subscriptions to task state and progress
4. Real-time sharing of intermediate discoveries between agents
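The steps above resemble a blackboard pattern: an orchestrator decomposes work, sub-agents run in parallel, and everyone publishes intermediate findings to a shared store that others can subscribe to. A minimal sketch of that shape (all names here are hypothetical; the actual skill's API will differ, and real sub-agents would be LLM calls rather than functions):

```python
import threading
import queue

class Blackboard:
    """Shared context store: agents publish discoveries, subscribers are notified."""
    def __init__(self):
        self._lock = threading.Lock()
        self._facts = {}
        self._subscribers = []

    def subscribe(self):
        # Each subscriber gets its own queue of (key, value) updates.
        q = queue.Queue()
        with self._lock:
            self._subscribers.append(q)
        return q

    def publish(self, key, value):
        # Record a discovery and push it to every subscriber in real time.
        with self._lock:
            self._facts[key] = value
            subs = list(self._subscribers)
        for q in subs:
            q.put((key, value))

    def snapshot(self):
        with self._lock:
            return dict(self._facts)

def sub_agent(name, task, board):
    # Stand-in for an LLM worker: do the task, share the result.
    board.publish(task, f"{name} finished {task}")

def orchestrate(goal, board):
    # Orchestrator decomposes the goal and fans subtasks out to parallel workers.
    subtasks = [f"{goal}:part{i}" for i in range(3)]
    threads = [threading.Thread(target=sub_agent, args=(f"agent{i}", t, board))
               for i, t in enumerate(subtasks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return board.snapshot()

board = Blackboard()
updates = board.subscribe()           # watch progress as it happens
results = orchestrate("refactor", board)
print(len(results))                   # one shared fact per subtask
```

The interesting part for long-running tasks is the subscription queue: an observer (or another agent) sees intermediate discoveries as they land, rather than waiting for the whole run to finish.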
We tested this on a Putnam-level math problem, but the pattern generalizes to things like refactors, app builds, and long research. It’s packaged as a Claude Code skill and designed to be small, readable, and modifiable.
Use it, break it, and tell us which workloads we should try next!
giancarlostoro|17 days ago
* Throw more agents
* Use something like Beads

I'm in the latter camp. I don't have infinite resources, so I'd rather stick to one agent and optimize what it can do. When I hit my Claude Code limit, I stop; I use Claude Code primarily for side projects.
gck1|17 days ago
I ignore all Skills and MCPs and view them as distractions that consume context, which leads to worse performance. It's better to observe what the agent is doing, see where it needs help, and throw a few bits of helpful, sometimes persistent, context at it.
You can't observe what 20 agents are doing.
raniazyane|17 days ago
At some point the interesting question isn’t whether one agent or twenty agents can coordinate better, but which decisions we’re comfortable fully delegating versus which ones feel like they need a human checkpoint.
Multi-agent systems solve coordination and memory scaling, but they also make it easier to move further away from direct human oversight. I'm curious how people here think about where that boundary should sit, especially for tasks that have real downstream consequences.
austinbaggio|17 days ago
More specifically, we've been working on a memory/context observability agent. It's currently good at understanding users and the wider memory space, so it could help with oversight, or at least the introspection part.
yodon|17 days ago
If your registration process is eventually going to ask me for a username, can the org name and user name be the same?
austinbaggio|17 days ago
Any workloads you want to see? The best ones have a measurable definition of success. We're thinking about recreating the C compiler example Anthropic did, but for less than the $20k in tokens they used.