buryat|2 days ago
Lots of logs contain uninteresting information, so they easily pollute the context. Instead, my approach uses a TF-IDF classifier plus a BERT model on GPU to further classify log lines, reducing the number of logs that then need to be fed to an LLM. The total size of the models is 50MB, and the classifier is written in Rust, which lets it classify >1M lines/sec. And it finds interesting cases that simple grepping can miss.
I trained it on ~90GB of logs and provide scripts to retrain the models (https://github.com/ascii766164696D/log-mcp/tree/main/scripts)
It's meant to be used with the Claude Code CLI, so it can use these tools instead of trying to read the log files directly
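As a rough illustration of why a first-stage filter like this can be so fast, here is a minimal TF-IDF line scorer in Rust. This is a hypothetical sketch, not the repo's actual classifier: it treats each log line as a document, learns IDF weights from a corpus, and scores lines so that rare (likely interesting) lines outscore repetitive boilerplate before anything heavier runs.

```rust
use std::collections::{HashMap, HashSet};

/// Minimal TF-IDF line scorer (illustrative sketch, not the repo's code).
struct TfIdf {
    idf: HashMap<String, f64>,
}

impl TfIdf {
    /// Fit IDF weights, treating each log line as one document.
    fn fit(corpus: &[&str]) -> Self {
        let n = corpus.len() as f64;
        let mut df: HashMap<String, f64> = HashMap::new();
        for line in corpus {
            // Count each token once per line (document frequency).
            let uniq: HashSet<String> =
                line.split_whitespace().map(|t| t.to_lowercase()).collect();
            for tok in uniq {
                *df.entry(tok).or_insert(0.0) += 1.0;
            }
        }
        // Smoothed IDF: rare tokens get higher weight.
        let idf = df
            .into_iter()
            .map(|(tok, d)| (tok, ((1.0 + n) / (1.0 + d)).ln() + 1.0))
            .collect();
        TfIdf { idf }
    }

    /// Score a line as the sum of tf * idf over its tokens. Lines full of
    /// boilerplate terms score low; lines with unusual terms score high.
    fn score(&self, line: &str) -> f64 {
        let toks: Vec<String> =
            line.split_whitespace().map(|t| t.to_lowercase()).collect();
        if toks.is_empty() {
            return 0.0;
        }
        let mut tf: HashMap<&str, f64> = HashMap::new();
        for t in &toks {
            *tf.entry(t.as_str()).or_insert(0.0) += 1.0;
        }
        tf.iter()
            .map(|(t, c)| c / toks.len() as f64 * self.idf.get(*t).copied().unwrap_or(0.0))
            .sum()
    }
}

fn main() {
    let corpus = [
        "INFO request handled ok",
        "INFO request handled ok",
        "INFO request handled ok",
        "ERROR deadlock detected in worker pool",
    ];
    let model = TfIdf::fit(&corpus);
    // The rare error line outscores the repetitive INFO lines, so only it
    // would be forwarded to the heavier BERT stage / LLM.
    assert!(model.score(corpus[3]) > model.score(corpus[0]));
    println!("ok");
}
```

Because scoring is just hash lookups over tokens, a filter like this runs at memory bandwidth speeds, which is what makes the >1M lines/sec figure plausible for the first stage.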
aluzzardi|2 days ago
This is an interesting approach. I definitely agree with the problem statement: if the LLM has to filter by error/fatal because of context window constraints, it will miss crucial information.
We took a different approach: we have a main agent (opus 4.6) dispatching "log research" jobs to sub agents (haiku 4.5 which is fast/cheap). The sub agent reads a whole bunch of logs and returns only the relevant parts to the parent agent.
This is exactly how coding agents (e.g. Claude Code) do it as well. Except instead of having sub agents use grep/read/tail, they use plain SQL.
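The dispatch pattern described above can be sketched roughly as follows. This is a hypothetical stand-in, not the actual system: the "sub agent" here is a trivial keyword filter standing in for a cheap-model call, and the function names are made up. The point is the shape: the parent never reads the full logs, only each sub agent's condensed findings.

```rust
// Stand-in for a sub-agent LLM call: given a chunk of logs and a research
// question, return only the relevant lines (hypothetical keyword filter
// in place of a real model call).
fn sub_agent_research(chunk: &[&str], question: &str) -> Vec<String> {
    chunk
        .iter()
        .filter(|line| question.split_whitespace().any(|q| line.contains(q)))
        .map(|s| s.to_string())
        .collect()
}

// The parent agent dispatches chunks to sub agents and only ever sees
// their condensed results, keeping its own context window small.
fn main_agent(logs: &[&str], question: &str, chunk_size: usize) -> Vec<String> {
    logs.chunks(chunk_size)
        .flat_map(|chunk| sub_agent_research(chunk, question))
        .collect()
}

fn main() {
    let logs = [
        "INFO boot complete",
        "ERROR timeout connecting to db",
        "INFO request ok",
        "WARN retry scheduled after timeout",
    ];
    let findings = main_agent(&logs, "timeout", 2);
    assert_eq!(findings.len(), 2);
    println!("{:?}", findings);
}
```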
buryat|2 days ago
And I just wanted to try MCP tooling, tbh. Took me 2 days to create this, hehe
jcgrillo|2 days ago
https://github.com/y-scope/clp
https://www.uber.com/blog/reducing-logging-cost-by-two-order...
buryat|2 days ago
Since the classifier needs access to the whole log message, I looked into how search is organized for CLP compression and found this:

> First, recall that CLP-compressed logs are searchable–a user query will first be directed to dictionary searches, and only matching log messages will be decompressed.

So yes, it can be combined with a classifier: as matching messages get decompressed, you get a filtered view of only the log lines that should be interesting.
The toughest part is still figuring out what "interesting" actually means in this context; without domain knowledge of the logs it would be difficult to capture everything. But I think it's still better than going through all the logs after searching.
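The combination described above can be sketched as a two-stage pipeline. This is a hypothetical illustration, not CLP's real API: a CLP-style dictionary maps terms to message ids, so only matching messages are "decompressed" (here, just looked up in an array standing in for compressed storage) before the classifier stage sees them.

```rust
use std::collections::HashMap;

// Hypothetical composition of CLP-style dictionary search with a
// classifier stage. `interesting` stands in for the TF-IDF/BERT filter.
fn search_then_classify(
    dict: &HashMap<&str, Vec<usize>>, // term -> ids of messages containing it
    archive: &[&str],                 // stand-in for compressed storage
    query_term: &str,
    interesting: impl Fn(&str) -> bool,
) -> Vec<String> {
    dict.get(query_term)
        .into_iter()
        .flatten()
        .map(|&id| archive[id].to_string()) // decompress only dictionary matches
        .filter(|msg| interesting(msg))     // then keep only "interesting" lines
        .collect()
}

fn main() {
    let archive = ["INFO ok", "ERROR disk full", "INFO ok", "WARN disk slow"];
    let mut dict: HashMap<&str, Vec<usize>> = HashMap::new();
    dict.insert("disk", vec![1, 3]);
    dict.insert("ok", vec![0, 2]);
    // Only messages matching "disk" are decompressed; only ERROR-level
    // ones survive the (stub) classifier.
    let hits = search_then_classify(&dict, &archive, "disk", |m| m.contains("ERROR"));
    assert_eq!(hits, vec!["ERROR disk full".to_string()]);
    println!("{:?}", hits);
}
```

The design benefit is that the expensive step (decompression plus classification) only ever touches the dictionary matches, never the whole archive.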
buryat|2 days ago
In my tool I was going more off the premise that it's frequently difficult to even say what you're looking for, so I wanted a step after reading the logs that decides what should actually be analyzed further, which naturally requires some model