top | item 46711958

Show HN: Grov – Multiplayer for AI coding agents

24 points | tonyystef | 1 month ago | github.com

Hi HN, I'm Tony.

I built Grov (https://grov.dev/) because I hit a wall with current AI coding assistants: they are "single-player." The moment I kill a terminal pane or close a chat session, the high-level reasoning and architectural decisions generated during that session are lost. If a teammate touches that same code an hour later, their agent has to re-derive everything from scratch, or wade through documentation files for every feature implemented and bug fixed.

I wanted to stop writing docs whose only purpose is to give my agents context, and to stop re-explaining to my agents what a teammate did and why.

Grov is an open-source context layer that effectively gives your team's AI agents a shared, persistent memory.

Here is the technical approach:

1. Decision-grain memory, not document storage: When you sync a memory, Grov structures knowledge at the decision level. We capture the specific aspect (e.g., "Auth Strategy"), the choice made ("JWT"), and the reasoning ("Stateless for scaling"). Crucially, when your codebase evolves, we don't overwrite memories; we mark old decisions as superseded and link them to the new choice. This gives your team an audit trail of architectural evolution, not just the current snapshot.
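To make the supersede-and-link idea concrete, here is a minimal sketch of what a decision-grain record could look like. The schema, field names, and `supersede` helper are my assumptions for illustration, not Grov's actual data model:

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass
class Memory:
    """One decision-grain memory record (hypothetical schema)."""
    aspect: str                           # e.g. "Auth Strategy"
    choice: str                           # e.g. "JWT"
    reasoning: str                        # e.g. "Stateless for scaling"
    id: str = field(default_factory=lambda: uuid.uuid4().hex)
    superseded_by: Optional[str] = None   # link to the newer decision, if any

def supersede(store: dict, old_id: str, new: Memory) -> Memory:
    """Record a new decision without deleting the old one."""
    store[new.id] = new
    store[old_id].superseded_by = new.id  # old record stays as the audit trail
    return new

store = {}
jwt = Memory("Auth Strategy", "JWT", "Stateless for scaling")
store[jwt.id] = jwt
sessions = supersede(store, jwt.id,
                     Memory("Auth Strategy", "Server sessions",
                            "Needed instant token revocation"))
# Both decisions remain queryable; the old one points at its replacement.
assert store[jwt.id].superseded_by == sessions.id
```

The key property is that nothing is destroyed on change: asking "why is it like this?" can walk the `superseded_by` chain back through the project's history.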

2. Git-like branches for memories: Teams experimenting with different approaches can create memory branches. Memories on a feature branch stay isolated until you are ready to merge. Access control mirrors Git: main is team-wide, while feature branches keep noise isolated. When you merge the branch, those accumulated insights become instantly available to everyone's agents.
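The branch-scoping rule above (main is team-wide; a feature branch sees main plus its own memories; merging promotes them) can be sketched in a few lines. The class and method names here are illustrative assumptions, not Grov's real interface:

```python
from collections import defaultdict

class MemoryBranches:
    """Git-like branch scoping for memories (illustrative sketch)."""
    def __init__(self):
        self.branches = defaultdict(list)   # branch name -> list of memories

    def add(self, branch: str, memory: str):
        self.branches[branch].append(memory)

    def visible(self, branch: str) -> list:
        # An agent sees main plus its own branch; other branches stay isolated.
        scope = list(self.branches["main"])
        if branch != "main":
            scope += self.branches[branch]
        return scope

    def merge(self, branch: str):
        # On merge, the feature branch's insights become team-wide.
        self.branches["main"] += self.branches.pop(branch, [])

mb = MemoryBranches()
mb.add("main", "Use Postgres")
mb.add("feat/cache", "Tried Redis LRU; evictions too aggressive")
assert "Tried Redis LRU; evictions too aggressive" not in mb.visible("main")
mb.merge("feat/cache")
assert "Tried Redis LRU; evictions too aggressive" in mb.visible("main")
```

Until the merge, teammates on main never see the experiment's notes; after it, every agent picks them up automatically.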

3. Two-stage injection (token optimization): The expensive part of shared memory isn't storage; it's the context window. Loading 10 irrelevant memories wastes tokens and confuses the model. Grov uses a "Preview → Expand" strategy. Preview: a hybrid semantic/keyword search returns lightweight memory summaries (~100 tokens each). Expand: the full reasoning traces (~500-1k tokens) are injected only if the agent explicitly requests more detail. This typically results in a 50-70% token reduction per session compared to raw context dumping.

The result: Your teammate's agent doesn't waste 5 minutes re-exploring why you chose Postgres over Redis, or re-reading auth middleware. It just knows, because your agent already figured it out and shared it.

GitHub: https://github.com/TonyStef/Grov

8 comments

kristopolous|1 month ago

byterover has been doing something similar for a while. amp was initially doing a variation of this and then pivoted. I built a similar tool about 9 months ago and then abandoned it.

The approach seems tempting, but there's something off about it that I think I might have figured out.

indigodaddy|1 month ago

exe.dev has pretty much solved this with Shelley

dang|1 month ago

[under-the-rug stub - see https://news.ycombinator.com/item?id=45988611 for explanation]

[guys, don't do this! HN will flame you for it and it will ruin your otherwise fine Show HN thread]

dolevalgam|1 month ago

I really need this with all the sessions open

ambersahdev|1 month ago

Do you deal with memory compaction yourself or let the models handle it?

sintem|1 month ago

dope. let me give it a go.