aluzzardi | 2 days ago
This is an interesting approach. I definitely agree with the problem statement: if the LLM has to filter by error/fatal because of context window constraints, it will miss crucial information.
We took a different approach: a main agent (Opus 4.6) dispatches "log research" jobs to sub agents (Haiku 4.5, which is fast and cheap). Each sub agent reads a large volume of logs and returns only the relevant parts to the parent agent.
This is exactly how coding agents (e.g. Claude Code) do it as well, except that where their sub agents use grep/read/tail, ours use plain SQL.
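A minimal sketch of what "grep/tail, but in SQL" might look like for a sub agent. The table name, columns, and sample rows are all illustrative assumptions, not the actual schema:

```python
# Hypothetical sketch: SQL equivalents of grep and tail that a log
# sub-agent could run over a narrow slice of logs.
# The "logs" table and its columns are assumptions for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (ts INTEGER, level TEXT, msg TEXT)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?, ?)",
    [
        (1, "INFO", "worker started"),
        (2, "INFO", "connection pool exhausted, retrying"),
        (3, "ERROR", "request timed out"),
    ],
)

# grep equivalent: pattern match over the slice
grep_hits = conn.execute(
    "SELECT ts, msg FROM logs WHERE msg LIKE ? ORDER BY ts", ("%pool%",)
).fetchall()

# tail equivalent: last N lines
tail_hits = conn.execute(
    "SELECT ts, level, msg FROM logs ORDER BY ts DESC LIMIT 2"
).fetchall()
```

Note the grep query here surfaces an INFO-level line ("connection pool exhausted"), which is the point: a level filter alone would have dropped it.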
buryat | 2 days ago
And I just wanted to try MCP tooling tbh hehe. Took me 2 days to create this.
aluzzardi | 2 days ago
- Opus agent wakes up when we detect an incident (e.g. CI broke on main)
- It looks at the big picture (e.g. which job broke) and makes a plan to investigate
- It dispatches narrowly focused tasks to Haiku sub agents (e.g. "extract the failing log patterns from commit XXX on job YYY ...")
- Sub agents use the equivalent of "tail", "grep", etc (using SQL) on a very narrow sub-set of logs (as directed by Opus) and return only relevant data (so they can interpret INFO logs as actually being the problem)
- Parent Opus agent correlates between sub agents. Can decide to spawn more sub agents to continue the investigation
It's no different than what I would do as a human, really. If there are terabytes of logs, I'm not going to read all of them: I'll make a plan, open a bunch of tabs and surface interesting bits.
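The fan-out described in the steps above can be sketched as plain orchestration code. Here `run_sub_agent` is a stand-in for a cheap-model (Haiku) call, and the log data, job names, and function names are all illustrative assumptions:

```python
# Sketch of the parent/sub-agent pattern: a parent "planner" dispatches
# narrowly scoped log-research tasks, each sub-agent sees only its own
# slice of logs, and the parent correlates the findings.
from concurrent.futures import ThreadPoolExecutor

# Illustrative log slices, keyed by CI job name (assumed data).
LOGS = {
    "job-build": ["INFO compile ok", "INFO cache miss", "ERROR link failed"],
    "job-test": ["INFO 120 tests passed", "INFO flaky test retried"],
}

def run_sub_agent(task):
    """Stand-in for a sub-agent: scan one job's logs for one pattern."""
    job, pattern = task
    return [line for line in LOGS[job] if pattern in line]

def investigate(tasks):
    """Parent agent: dispatch tasks in parallel, then correlate."""
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(run_sub_agent, tasks))
    # Correlation step: in the real system the parent model reasons over
    # these findings and may spawn follow-up tasks.
    return {job: hits for (job, _), hits in zip(tasks, findings) if hits}

report = investigate([("job-build", "ERROR"), ("job-test", "flaky")])
```

The structure mirrors the human workflow in the comment above: plan, open several narrow "tabs" in parallel, and only surface the interesting bits to the top level.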
synergy20 | 1 day ago