I'd love to see what is being achieved by these massive parallel agent approaches. If it's so much more productive, where is all the great software that's being built with it? What is the OP building?
Most of what I'm seeing is AI influencers promoting their shovels.
Most software is a mundane, run-of-the-mill CRUD feature set. Just yesterday I rolled out 5 new web pages and revamped a landing page in under an hour, which would have easily taken 3-4 days of back and forth.
There are a lot of similar builds happening. Either way, you can try to incorporate AI coding into your workflow and see where it takes you.
> If it's so much more productive, where is all the great software that's being built with it?
This is such a new and emerging area that I don't understand how this is a constructive comment on any level.
You can be skeptical of the technology in good faith, but I think one shouldn't be against people being curious and engaging in experimentation. A lot of us are actively trying to see what exactly we can build with this, and I'm not an AI influencer by any means. How do we find out without trying?
I feel like we're still at a "building tools to build tools" stage in multi-agent coding. A lot of interesting projects are springing up to see if they can get many agents to effectively coordinate on a project. If anything, it would be useful to understand what failed and why, so one can have an informed opinion.
Even if somebody shows you what they've built with it, you're none the wiser. All you'll know is that it seemingly works well enough for a greenfield project.
The jury is still very far out on how agentic development affects mid/long term speed and quality. Those feedback cycles are measured in years, not weeks. If we bother to measure at all.
People in our field generally don't do what they know works, because by and large, nobody really knows, beyond personal experiences, and I guess a critical mass doesn't even really care. We do what we believe works. Programming is a pop culture.
I just avoided $1.8 million/year in review time with parallel agents for a code review workflow.
We have 500+ custom rules that are context-sensitive, because I work on a large, performance-sensitive C++ codebase with cooperative multitasking. Many of the good practices are non-intuitive, and commercial code review tools don't get 100% coverage of the rules, so reviewing for them took a lot of senior engineering time.
Anyway, I set up a massively parallel agent infrastructure in CI that chunks the review guidelines into tickets, adds them to a queue, and has agents spit out GitHub code review comments. Then a manager agent validates the comments/suggestions using scripts and posts the review. Since these are coding agents, they can autonomously gather context or run code to validate their suggestions.
It instantly reduced mean time to merge by 20% in an A/B test. Assuming 50% of engineering time is spent on review, my org would've needed 285 more review hours a week for the same effect. It's super high signal as well: it catches far more than any human can and never gets tired.
Likewise, we can scale this to any arbitrary review task, so I'm looking at adding benchmarking and performance tuning suggestions for menial profiling tasks like "what data structure should I use".
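The pipeline described above (chunk the guidelines into tickets, queue them, fan out to worker agents, have a manager validate before posting) can be sketched roughly like this. Everything here is a hypothetical skeleton: `review_with_agent` stands in for whatever agent CLI or API actually performs the review.

```python
# Hypothetical sketch of a chunked, parallel review pipeline.
from concurrent.futures import ThreadPoolExecutor

RULES_PER_TICKET = 10  # keep each agent's context small and focused

def chunk_rules(rules, size=RULES_PER_TICKET):
    """Split the full guideline list into per-ticket chunks."""
    return [rules[i:i + size] for i in range(0, len(rules), size)]

def review_with_agent(chunk, diff):
    """Placeholder for invoking one coding agent on one rule chunk.
    A real pipeline would call your agent CLI/API here and return
    structured review comments."""
    return [{"rule": r, "comment": f"checked {r}"} for r in chunk]

def validate(comment):
    """Manager-side filter: drop suggestions that fail scripted checks."""
    return bool(comment.get("comment"))

def run_review(rules, diff):
    chunks = chunk_rules(rules)
    with ThreadPoolExecutor(max_workers=8) as pool:
        batches = list(pool.map(lambda c: review_with_agent(c, diff), chunks))
    # Flatten and keep only validated comments for the posted review.
    return [c for batch in batches for c in batch if validate(c)]
```

The point of the chunking is that each worker agent only ever holds a handful of rules in context, which keeps the per-rule signal high as the rule set grows.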
It's for personal use, and I wouldn't call it great software, but I used Claude Code Teams in parallel to create a Fluxbox-compatible window compositor for Wayland [1].
Overall effort was a few days of agentic vibe-coding over a period of about 3 weeks. It would have been faster, but the parallel agents burn through tokens extremely quickly and hit Max plan limits in under an hour.

[1] https://github.com/ecliptik/fluxland
There are so many more iOS apps being published now that it takes a week to get a dev account, review times are longer, and app volume is way up. It's not really a thing you're going to notice one way or the other if you're just going by vibes.
I work for Snowflake and the code I'm building is internal. I'm exploring open sourcing my main project which I built with this system. I'd love to share it one day!
I'm experimenting with building an agent swarm to take a very large existing app that's been built over the past two decades (internal to the company I work for) and reverse engineer documentation from the code. I can then use that documentation as the basis for my teams to refactor big chunks of old, no-longer-owned-by-anyone features and to build new features with AI more effectively. The initial work of just building a large-scale understanding of exactly what we actually run in prod is a massively parallelizable task that should be a good fit for documentation-writing agents. Early days, but so far my experiments seem to be working out.
Obviously no users will see a benefit directly, but I reckon it'll speed up delivery of code a lot.
You're not wrong. The current bottleneck is validation. If you use orchestration to ship faster, you have less time to validate what you're building, and the quality goes down.
If you have a really big test suite to build against, you can do more, but we're still a ways off from dark software factories being viable. I guessed ~3 years back in mid 2025 and people thought I was crazy at the time, but I think it's a safe time frame.
In my view, these agent teams have really only become mainstream in the last ~3 weeks since Claude Code released them. Before that they were out there but were much more niche, like in Factory or Ralphie Wiggum.
There is a component to this that keeps a lot of the software being built with these tools underground: there are a lot of very vocal people who are quick with downvotes and criticisms of things built with AI tooling, criticisms that wouldn't have been applied to the same (or even a poorer) result if it had been generated by a human.
This is largely why I haven't released one of the tools I've built for internal use: an easy status dashboard for operations people.
Things I've done with agent teams:

- Added a first-class ZFS backend to ganeti.
- Rebuilt our "icebreaker" app that we use internally (largely to add special effects and make it more fun).
- Built a "filesystem swiss army knife" for Ansible.
- Converted a Lambda function that does image manipulation and watermarking from Pillow to pyvips, and had it build versions in Go, Rust, and Zig for comparison's sake.
- Built tooling for regenerating our cache of watermarked images using new branding.
- Had it connect to a pair of MS SQL test servers and identify why log shipping was broken between them.
- Built an Ansible playbook to deploy a new AWS account.
- Made a simple video poker web app (a demo for the local users group; someone there was asking how to get started with AI).
- Had it brainstorm and build 3 versions of a crossword-themed daily puzzle (just to see what it'd come up with; my wife and I are enjoying TiledWords and I wanted to see what AI would come up with).
Those are the most memorable things I've used the agent teams to build in the last 3 weeks. Many of those things are internal tools or just toys, as another reply said. Some of those are publicly released or in progress for release. Most of these are in addition to my normal work, rather than as a part of it.
People are building for themselves. However, I'd also reference www.Every.to
They built the popular compound-engineering plugin and have shipped a set of production grade consumer apps. They offer a monthly subscription and keep adding to that subscription by shipping more tools.
There are dozens and dozens of these submitted to Show HN, though increasingly without the title prefix now. This one doesn't seem any more interesting than the others.
The deny list section hit home. I keep seeing agents use unlink instead of rm, or spawn a python subprocess to delete files. Every new rule just taught the agent a new workaround.
Ended up flipping the model: instead of blocking bad actions, require proof of safety before any action runs. No proof, no action. Much harder to route around. Curious if you've tried anything similar.
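A minimal sketch of what such a proof-of-safety gate could look like, assuming a hypothetical guard sitting in front of shell actions. The names and the path-based check are purely illustrative, not from any particular agent framework; the point is the inversion from "block known-bad" to "default deny, allow only what is provably in scope":

```python
# Sketch of an "allow only with proof" gate, inverting a deny list:
# an action runs only if it passes a machine-checkable safety claim.
import shlex

SAFE_ROOTS = ("/tmp/agent-workspace/",)  # assumption: agents work here

def proof_of_safety(command: str) -> bool:
    """Tiny example check: every absolute path in the command must
    live under an approved workspace root. No proof, no action."""
    for arg in shlex.split(command):
        if arg.startswith("/") and not arg.startswith(SAFE_ROOTS):
            return False
    return True

def run_guarded(command: str):
    if not proof_of_safety(command):
        raise PermissionError(f"no safety proof for: {command}")
    ...  # only here would the command actually be executed
```

Because the check is on the effect (which paths are touched) rather than the tool name, swapping `rm` for `unlink` or a Python subprocess no longer routes around it.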
For major, in-depth refactors and large-scale architectural work, it's really important to keep the agents on track, to prevent them from assuming or misunderstanding important things. I can't imagine what it'd be like doing that with parallel agents; I don't see how that's useful. And I'm a massive fan of agentic coding!
It's like OpenClaw for me: I love the idea of agentic computer use, but I just don't see how something so unsupervised and unsupervisable is remotely useful or a good idea.
I did a sort of bell curve with this type of workflow over summer.
- Base Claude Code (released)
- Extensive, self-orchestrated, local specs & documentation; ie waterfall for many features/longer term project goals (summer)
- Base Claude Code (today)
Claude Code is getting better at orchestrating its own subagents for divide-and-conquer work.
My problem with these extensive self-orchestrated multi-agent/spec modes is the drift and rot across all the changes and the integrated parts of an application, which a lot of the time end up in merge conflicts. Aside from my own decision-making cognitive space, it's also a lot to generally orchestrate and review. I spent a ton of time enforcing that Claude use the system I put in place, including documentation updates and continuous logging of work.
I feel extremely productive with a single Claude Code for a project. Maybe for minor features, I'll launch Claude Code in the web so that it can operate in an isolated space to knock them out and create a PR.
I will plan and annotate extensively for large features, but not many features or broad project specs all at the same time. Annotation and better planning UX, I think, are going to be increasingly important for now. The only augment of Claude Code I have is a hook for plan mode review: https://github.com/backnotprop/plannotator
The merge conflicts and cognitive load are indeed two big struggles with my setup. Going back to a single Claude instance, however, would mean I'm waiting for things to happen most of the time. What do you do while Claude is busy?
I certainly don't run 6 at a time, but even with just 1 - if it's doing anything visual - how are folks hooking up screenshots to self verify? And how do you keep an eye on it?
The only solution I've seen on a Mac is doing it on a separate monitor.
I couldn't find a solution here and have built similar things in the past so I took a crack at it using CGVirtualDisplay.
Ended up adding a lot of productivity features and polished until it felt good.
Curious if there are similar solutions out there I just haven't seen.

https://github.com/jasonjmcghee/orcv
For macOS, generically, you can run `screencapture -o -l $WINDOW_ID output.png` to screenshot any window. You can list window IDs belonging to a PID with a few lines of Swift (that any agent will generate). Hook this up together and give it as a tool to your agents.
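A small Python wrapper around that command, as one sketch of how it could be handed to an agent as a single "screenshot this window" tool. The function names are made up; the actual capture only works on macOS, since it shells out to `screencapture`:

```python
# macOS-only sketch wrapping the `screencapture` CLI mentioned above.
import subprocess

def screencapture_cmd(window_id: int, out_path: str) -> list[str]:
    # -o omits the window shadow, -l targets a specific window ID
    return ["screencapture", "-o", "-l", str(window_id), out_path]

def screenshot_window(window_id: int, out_path: str) -> str:
    """Capture one window to a PNG. Runs only on macOS; raises
    CalledProcessError if `screencapture` fails."""
    subprocess.run(screencapture_cmd(window_id, out_path), check=True)
    return out_path
```

An agent can then call the tool, read the PNG back, and self-verify its visual changes without any screen-sharing setup.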
We ran something similar for a browser automation project - multiple agents working on different modules in parallel with shared markdown specs. The bottleneck wasn't the agents, it was keeping their context from drifting. Each tmux pane has its own session state, so you end up with agents that "know" different versions of reality by the second hour.
The spec file helps, but we found we also needed a short shared "ground truth" file the agents could read before taking any action - basically a live snapshot of what's actually done vs what the spec says. Without it, two agents would sometimes solve the same problem in incompatible ways.
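A sketch of what such a shared ground-truth file could look like, with file locking so parallel agents can claim tasks atomically instead of solving the same problem twice. The file name and schema here are made up for illustration:

```python
# Sketch of the shared "ground truth" idea: a small snapshot agents
# must consult before acting and update after finishing a task.
import fcntl
import json
from pathlib import Path

GROUND_TRUTH = Path("ground-truth.json")

def read_ground_truth() -> dict:
    """What's actually done vs what the spec says, per the live file."""
    if not GROUND_TRUTH.exists():
        return {"done": [], "claimed": {}}
    return json.loads(GROUND_TRUTH.read_text())

def claim_task(agent: str, task: str) -> bool:
    """Atomically claim a task; returns False if it's already taken."""
    with open(GROUND_TRUTH, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # one writer at a time
        f.seek(0)
        raw = f.read()
        state = json.loads(raw) if raw else {"done": [], "claimed": {}}
        if task in state["done"] or task in state["claimed"]:
            return False  # another agent already has it
        state["claimed"][task] = agent
        f.seek(0)
        f.truncate()
        f.write(json.dumps(state))
        return True
```

Because the claim check and the write happen under one lock, two agents racing for the same task can't both "win", which is exactly the incompatible-duplicate-solutions failure described above.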
Has anyone found a clean way to sync context across parallel sessions without just dumping everything into one massive file?
This maps closely to something we've been exploring in our recent paper. The core issue is that flat context windows don't organize information scalably, so as agents work in parallel they lose track of which version of 'reality' applies to which component. We proposed NERDs (Networked Entity Representation Documents): Wikipedia-style docs that consolidate all info about a code entity (its state, relationships, recent changes) into a single navigable document, cross-linked with other documents, that any agent can read. The idea is that the shared memory is entity-centered rather than chronological. Might be relevant: https://www.techrxiv.org/users/1021468/articles/1381483-thin...
I’ve been using Steve Yegge’s Beads[1] lightweight issue tracker for this type of multi-agent context tracking.
I only run a couple of agents at a time, but with Beads you can create issues, and then agents can assign them to themselves, etc. Agents or the human driver can also add context in epics, and I think you can have perpetual issues which contain context too. Or you could make those as a type of issue yourself; it's a very flexible system.

[1] https://github.com/steveyegge/beads
I've been building agent-doc [1] to solve exactly this. Each parallel Claude Code session gets its own markdown document as the interface (e.g., tasks/plan.md, tasks/auth.md). The agent reads/writes to the document, and a snapshot-based diff system means each submit only processes what changed — comments are stripped, so you can annotate without triggering responses.
The routing layer uses tmux: `agent-doc claim`, `route`, `focus`, `layout` commands manage which pane owns which document, scoped to tmux windows. A JetBrains plugin lets you submit from the IDE with a hotkey — it finds the right pane and sends the skill command.
For context sync across agents, the key insight was: don't sync. Each agent owns one document with its own conversation history. The orchestration doc (plan.md) references feature docs but doesn't duplicate their content. When an agent finishes a feature, its key decisions get extracted into SPEC.md. The documents ARE the shared context — any agent can read any document.
It's been working well for running 4-6 parallel sessions across corky (an email client), agent-doc itself, and a JetBrains plugin, all from one tmux window with window-scoped routing.

[1] https://github.com/btakita/agent-doc
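The snapshot-based diff idea above can be sketched like this (illustrative only; agent-doc's actual implementation may differ), using HTML-style comments as the annotation syntax that gets stripped:

```python
# Sketch of a snapshot diff: on each submit, compare the current
# document to the last-seen snapshot and hand the agent only the new
# lines, with comments stripped so annotations never trigger responses.
import difflib
import re

COMMENT = re.compile(r"<!--.*?-->", re.DOTALL)

def strip_comments(text: str) -> str:
    """Remove HTML-style comments so human notes stay out of the diff."""
    return COMMENT.sub("", text)

def changed_lines(snapshot: str, current: str) -> list[str]:
    """Lines added since the last snapshot, comments removed."""
    old = strip_comments(snapshot).splitlines()
    new = strip_comments(current).splitlines()
    return [line[2:] for line in difflib.ndiff(old, new)
            if line.startswith("+ ")]
```

Each submit then only re-processes `changed_lines(...)`, so annotating an already-handled section of the document is free.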
This is a really cool design, pretty similar to what I've built for implementation planning. I like how iterative it is and that the whole system lives just in markdown. The verify step is a great idea I hadn't made a command yet, thank you!
This seems like it'd be great for solo projects but starts to fall apart for a team with a lot more PRs and distributed state. Heck, I run almost everything in a worktree, so even there the state is distributed. Maybe moving some of the state/plans/etc to Linear et al solves that though.
I've been experimenting with a similar pattern, but wrapping it in a "factory mode" abstraction (we're building this at CAS[1]) where you define the spec once, after careful planning with a supervisor agent, then let it go and spin up parallel workers against it automatically. It handles task decomposition and orchestration so you're not manually juggling tmux panes.

[1] https://cas.dev
I don't think number of parallel agents is the right productivity metric, or at least you need to account for agent efficiency.
Imagine a superhuman agent who does not need to run in endless loops. It could generate 100k line code-base in a few minutes or solve smaller features in seconds.
In a way, the inefficiency is what leads people to parallelism. There is only room for it because the agents are slow, perhaps the more inefficient and slower the individual agents are, the more parallel we can be.
A few experiments, like Gas Town, the compiler from Anthropic, or the browser from Cursor, managed to reach the Rocket stage, though in their reports the jagged intelligence of the LLMs was eerily apparent. Do you think we also need better models?
Even on Claude Max x1, if you run 2 agents with Opus in parallel you're going to hit limits. You can balance the model per use case though, but I wouldn't expect it to work on any $20 plan, even if you use Kimi Code.
I have /fd-verify, which I execute with the Worker after it's done implementing. I didn't feel the need to have a separate window/agent for reviewing; the same Worker can review its own code. What would be the benefits of having a separate Reviewer?
Is there a place where people like you go to share ideas around these new ways of working, other than HN? I'm very curious how these new ways of working will develop. In my system, I use voice memos to capture thoughts, and they become more or less what you have as feature designs. I notice I have a lot of ideas throughout the day (Claude chews through them some time later, and when they are worked out I review its plans in Notion; I use Notion because I can upload memos into it from my phone, so it's more or less what you call the index). But ideas I can only capture as they come, otherwise they are lost, and I don't want to spend time typing them out.
https://open.substack.com/pub/sluongng/p/stages-of-coding-ag...
I think we need much different tooling to go beyond a 1 human : 10 agents ratio, and much, much different tooling to achieve a higher ratio than that.