
We gave terabytes of CI logs to an LLM

226 points | shad42 | 2 days ago | mendral.com

109 comments


buryat|2 days ago

I just wrote a tool for reducing logs for LLM analysis (https://github.com/ascii766164696D/log-mcp)

Lots of logs contain non-interesting information, which easily pollutes the context. Instead, my approach uses a TF-IDF classifier plus a BERT model on GPU to further classify log lines and reduce the number of logs that then get fed to an LLM. The total size of the models is 50MB, and the classifier is written in Rust, so it achieves >1M lines/sec for classification. And it finds interesting cases that can be missed by simple grepping
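
A rough Python sketch of the first-stage idea (the real tool is a trained Rust classifier plus a BERT stage; the function name and threshold here are invented):

    # Score log lines by how "unusual" their tokens are and keep only the
    # outliers for the LLM. This is just the TF-IDF intuition, not the
    # actual log-mcp implementation.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    def interesting_lines(lines, keep_ratio=0.05):
        vec = TfidfVectorizer(token_pattern=r"[A-Za-z_]{3,}")
        tfidf = vec.fit_transform(lines)
        # Lines full of rare tokens (odd errors, stack frames) score high;
        # boilerplate ("connection ok", heartbeats) scores low.
        scores = np.asarray(tfidf.sum(axis=1)).ravel()
        cutoff = np.quantile(scores, 1 - keep_ratio)
        return [line for line, s in zip(lines, scores) if s >= cutoff]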

I trained it on ~90GB of logs and provide scripts to retrain the models (https://github.com/ascii766164696D/log-mcp/tree/main/scripts)

It's meant to be used with Claude Code CLI, so it can use these tools instead of trying to read the log files directly

aluzzardi|2 days ago

Mendral co-founder here and author of the post.

This is an interesting approach. I definitely agree with the problem statement: if the LLM has to filter by error/fatal because of context window constraints, it will miss crucial information.

We took a different approach: we have a main agent (opus 4.6) dispatching "log research" jobs to sub agents (haiku 4.5 which is fast/cheap). The sub agent reads a whole bunch of logs and returns only the relevant parts to the parent agent.

This is exactly how coding agents (e.g. Claude Code) do it as well. Except instead of having sub agents use grep/read/tail, they use plain SQL.
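
For illustration, the dispatch pattern looks roughly like this with the Anthropic SDK (prompts are invented and the model id may differ; this is not Mendral's actual code):

    # A cheap sub agent digests a large chunk of logs and returns only the
    # relevant parts, so the expensive parent model never sees raw logs.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

    def research_logs(question: str, raw_logs: str) -> str:
        msg = client.messages.create(
            model="claude-haiku-4-5",  # model id may differ
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": f"Question: {question}\n\nLogs:\n{raw_logs}\n\n"
                           "Return only the lines and patterns relevant "
                           "to the question.",
            }],
        )
        return msg.content[0].text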

sollewitt|2 days ago

But does it work? I've used LLMs for log analysis and they have been prone to hallucinating causes: depending on the logs, the distance between cause and effect can be larger than the context window; when things go badly wrong we're usually dealing with multiple failures at once; and plenty of benign issues throw scary-sounding errors.

aluzzardi|2 days ago

Post author here.

Yes, it works really well.

1) The latest models are radically better at this. We noticed a massive improvement in quality starting with Sonnet 4.5

2) The context issue is real. We solve this by using sub agents that read through logs and return only relevant bits to the parent agent’s context

verdverm|2 days ago

It can, like all the other tasks. It's not magic: you need to make the agent's job easier by giving it good instructions, tools, and environments. That's exactly the same thing that makes humans' lives easier too.

This post is a case study that shows one way to do this for a specific task. We found the root cause of a long-standing problem with our dev boxes this week using AI. I fed Gemini Deep Research a few logs and our tech stack, and it came back with an explanation of the underlying interactions, debugging commands, and the most likely fix. It was spot on; GDR is one of the best debugging tools for problems where you don't have full understanding.

If you are curious, and perhaps as a PSA: the issue was that Docker and Tailscale were competing over iptables updates, and in rare circumstances (one dev, once every few weeks) Docker DNS would get borked. The fix is to ignore Docker-managed interfaces in NetworkManager so Tailscale stops trying to do things with them.
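
For reference, the usual shape of that fix is a NetworkManager drop-in like the following (exact file name and interface patterns vary per setup; verify against your own configuration):

    # /etc/NetworkManager/conf.d/99-ignore-docker.conf
    # Tell NetworkManager to leave Docker-managed interfaces alone.
    [keyfile]
    unmanaged-devices=interface-name:docker0;interface-name:veth*;interface-name:br-*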

shad42|2 days ago

Mendral co-founder here. We built this infra to have our agent detect CI issues like flaky tests and fix them. Observing logs is useful for detecting anomalies, but we also use the logs to confirm a fix after the agent opens a PR (we have long coding sessions that verify a fix and re-run the CI if needed, all in the same agent loop).

So yes it works, we have customers in production.

hardolaf|1 day ago

I can't get an LLM to properly analyze a single 200K+ line log without it making things up, so whatever anyone is saying about this "working" is probably a lie.

kburman|2 days ago

Honestly, with recent models, these types of tasks are very much possible. Now it mostly depends on whether you are using the model correctly or not.

PaulHoule|2 days ago

My first take is that you could have 10 TB of logs with just a few unique lines that are actually interesting. So I am not thinking "Wow, what impressive big data you have there" but rather "if you have an accuracy of 1-10^-6 you are still overwhelmed with false positives" (10 TB is on the order of 10^10-10^11 lines, so even a 10^-6 error rate flags on the order of 10^4-10^5 lines) or "I hope your daddy is paying for your tokens"

jcgrillo|2 days ago

Yeah this is my experience with logs data. You only actually care about O(10) lines per query, usually related by some correlation ID. Or, instead of searching you're summarizing by counting things. In that case, actually counting is important ;).

In this piece though--and maybe I need to read it again--I was under the impression that the LLM's "interface" to the logs data is queries against clickhouse. So long as the queries return sensibly limited results, and it doesn't go wild with the queries, that could address both concerns?

aluzzardi|2 days ago

Mendral co-founder and post author here.

I agree with your statement and explained in a few other comments how we're doing this.

tldr:

- Something happens that needs investigating

- Main (Opus) agent makes focused plan and spawns sub agents (Haiku)

- They use ClickHouse queries to grab only relevant pieces of logs and return summaries/patterns

This is what you would do manually: you're not going to read through 10 TB of logs when something happens; you make a plan, open a few tabs and start doing narrow, focused searches.
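
For illustration, the kind of narrow, focused query a sub agent might run (table and column names are hypothetical, not Mendral's schema; shown with the clickhouse-connect driver):

    import clickhouse_connect

    client = clickhouse_connect.get_client(host="localhost")

    # Cheap overview first: which jobs started failing, and when?
    overview = client.query("""
        SELECT job_name, count() AS failures, min(ts) AS first_seen
        FROM ci_logs
        WHERE level = 'error' AND ts > now() - INTERVAL 1 DAY
        GROUP BY job_name
        ORDER BY failures DESC
        LIMIT 20
    """)

    # Then zoom in on one suspect with a bounded sample.
    sample = client.query("""
        SELECT ts, message
        FROM ci_logs
        WHERE job_name = 'integration-tests'
          AND ts > now() - INTERVAL 1 HOUR
        ORDER BY ts
        LIMIT 200
    """)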

Yizahi|2 days ago

We have an ongoing effort to parse logs for our autotests to speed up debugging. It is very hard to do, mainly because there is a metric ton of false positives or plain old noise even in the info logs. Tracing the culprit can also be tricky, since an error in container A can be caused by the actual failure in container B, which may in turn depend on something else entirely, including hardware problems.

Basically, any surefire way to get an LLM to parse logs and detect real issues depends almost entirely on the readability and precision of the logging. And if the logging is good enough, then humans can debug faster and more reliably too :). Unfortunately, the people reading logs and the people writing them barely intersect in practice, and so the issue remains.

hinkley|2 days ago

I think there are too many expectations around what logging is for, and getting everyone on the same page is difficult.

Meanwhile stats have fewer expectations, and moving signal out of the logs into stats is a much much smaller battle to win. It can’t tell you everything, but what it can tell you is easier to make unambiguous.

Over time I got people to stop pulling up Splunk as an automatic reflex and start pulling up Grafana instead for triage.
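
A tiny example of that move with the Prometheus Python client (metric and label names invented): instead of logging a line per retry and grepping for it later, count it.

    from prometheus_client import Counter, start_http_server

    UPSTREAM_RETRIES = Counter(
        "upstream_retries_total",
        "Retries against an upstream service",
        ["service"],
    )

    start_http_server(8000)  # expose /metrics for scraping

    def on_retry(service: str):
        UPSTREAM_RETRIES.labels(service=service).inc()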

shad42|2 days ago

Yeah, that sounds very similar to what we went through while building this agent. We're focused on CI logs for now because we wanted something that works really well for things like flaky tests, but we're planning to expand the context to infrastructure logs very soon.

rurban|1 day ago

I wrote lots of Prolog rules to analyze the log files of a complicated distributed system with 20 realtime components, to find problems and root causes. Worked really well. In 2008 or so

Cannot believe that LLMs are that useful. Whenever a component changes or adds a log line, you edit one rule. With an LLM you need weeks of new logs and then weeks to retrain. And a high budget for the H100s

nikita2206|1 day ago

That's not the state of LLMs today: nobody trains them for a specific use case, and almost nobody fine-tunes them either. You just have to give them some context and the means to gather more context (access to the code in order to see the logs at their source, access to the logs themselves, etc.) - whatever you would have access to as a human debugging this.

pphysch|2 days ago

"Logs" is doing some heavy lifting here. There's a very non-trivial step in deciding that a particular subset and schema of log messages deserves to be in its own columnar data table. It's a big optimization decision that adds complexity to your logging stack. For a narrow SaaS product that is probably a no-brainer.

I would like to see this approach compared to a more minimal approach with say, VictoriaLogs where the LLM is taught to use LogsQL, but overall it's a more "out of the box" architecture.

masterj|2 days ago

> There's a very non-trivial step in deciding that a particular subset and schema of log messages deserves to be in its own columnar data table.

IIUC this is addressed by the ClickHouse JSON type, which can promote individual fields in unstructured data into their own columns: https://clickhouse.com/blog/a-new-powerful-json-data-type-fo...

Parquet is getting a VARIANT data type which can do the same thing (called "shredding") but in a standards-based way: https://parquet.apache.org/blog/2026/02/27/variant-type-in-a...
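
Roughly, the ClickHouse side looks like this (illustrative schema, assuming a recent ClickHouse version): type-hinted paths get real columns, while everything else stays dynamic.

    import clickhouse_connect

    client = clickhouse_connect.get_client(host="localhost")
    client.command("""
        CREATE TABLE logs (
            ts   DateTime,
            data JSON(level String, service String)
        )
        ENGINE = MergeTree
        ORDER BY ts
    """)
    # Subcolumn access reads only the promoted column, not the whole blob:
    rows = client.query(
        "SELECT data.service, count() FROM logs GROUP BY data.service"
    )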

verdverm|2 days ago

This is one of those HN posts you share internally in the hopes you can work this into your sprint

TKAB|2 days ago

That post reads like it's fully LLM-generated. It's basically boasting a list of numbers that are supposed to sound impressive. If there's a coherent story, it's well hidden.

esafak|2 days ago

Forgive me if this is tangential to the debate, but I am trying to understand Mendral's value proposition. Is it that you save users time in setting up observability for CI? Otherwise could you not simply use gh to fetch the logs, their observability system's API or MCP, and cross check both against the code? Or is there a machine learning system that analyzes these inputs beyond merely retrieving context for the LLM? Good luck!

shad42|2 days ago

Mendral is replacing a human platform engineer. It debugs the CI logs, looks at the associated commit, looks at the implementation of the tests, etc. It then proposes fixes and takes care of opening a PR.

We wrote about how this works for PostHog: https://www.mendral.com/blog/ci-at-scale

gabeh|2 days ago

SQL has always been my favorite "loaded gun" API. If you have a control plane of RLS + role-based auth and you've got a data dictionary, it is trivial to get to a data-explorer chat interaction with an LLM doing the heavy lifting.
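
A minimal Postgres-flavored sketch of that control plane (table, role, and setting names invented): the LLM can generate arbitrary SELECTs while RLS and a read-only role cap what any session can see.

    # One-time guardrails, run as admin:
    GUARDRAILS = """
    ALTER TABLE events ENABLE ROW LEVEL SECURITY;
    CREATE POLICY tenant_isolation ON events
        USING (tenant_id = current_setting('app.tenant_id'));
    CREATE ROLE llm_reader;
    GRANT SELECT ON events TO llm_reader;  -- read-only, no DDL/DML
    """

    # Per-session, before executing any LLM-generated SQL:
    SESSION_SETUP = """
    SET ROLE llm_reader;
    SELECT set_config('app.tenant_id', %(tenant)s, false);
    """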

shad42|2 days ago

100% and LLMs have tons of related training data

prodini|1 day ago

This is a great example of RAG done right, feeding domain-specific data to an LLM instead of relying on generic training. The signal-to-noise ratio in CI logs is brutal though. Curious how you handled deduplication and filtering before embedding? In my experience that preprocessing step makes or breaks the quality of retrieval.

Noumenon72|2 days ago

Are you affiliated with Inngest at all? Great ad for them.

shad42|1 day ago

In some ways: we use their product and they use Mendral

the_arun|2 days ago

The article doesn't mention which LLM they used or the total cost. Because if they used ChatGPT or such, the token cost alone would be very expensive, right?

shad42|2 days ago

There is a cost associated with each investigation (that the Mendral agent is doing), and we spend time tuning the orchestration between agents. Yes, it's expensive, but we're making money on top of what it costs us. So far we've been able to bring the cost down while increasing the relevance of each root cause analysis.

We're writing another post about that specifically; we'll publish it sometime next week

Jerry2|2 days ago

Which LLM did they use? I'd really like to learn some details about how they set up the whole pipeline. Anyone know? Thanks!

aluzzardi|1 day ago

It started with Sonnet 4.0 as a single agent and now it’s a mix of Opus 4.6 and Haiku 4.5 agents.

Opus plans the investigation and orchestrates the searches.

Haiku is the one actually querying ClickHouse and returning relevant bits

sathish316|2 days ago

SQL is the best exploratory interface for LLMs. But most of the observability data we have today (metrics, logs, traces) is hidden behind layers of semantics and custom syntax that make it hard for an agent to translate an explore or debug intent into the actual query language.

Large-scale data like metrics, logs, and traces is optimised for storage and access patterns, and OLAP/SQL systems may not be the most optimal way to store or retrieve it. This is one of the reasons I've been working on a Text2SQL / Intent2SQL engine for observability data, to let an agent explore the schema, semantics, and syntax of any metrics or logs data. It is open sourced as the Codd Text2SQL engine - https://github.com/sathish316/codd_query_engine/

It is far from done, currently works for Prometheus, Loki, and Splunk in a few scenarios, and is open to OSS contributions. You can find it in action, used by Claude Code to debug with metrics and logs queries:

Metric analyzer and Log analyzer skills for Claude code - https://github.com/sathish316/precogs_sre_oncall_skills/tree...

mr-karan|2 days ago

Agreed on SQL being the best exploratory interface for agents. I've been building Logchef[1], an open-source log viewer for ClickHouse, and found the same thing — when you give an LLM the table schema, it writes surprisingly good ClickHouse SQL. I support both a simpler DSL (LogchefQL, compiles to type-aware SQL on the backend) and raw SQL, and honestly raw SQL wins for the agent use case — more flexible, more training data in the corpus.

I took this a few steps further beyond the web UI's AI assistant. There's an MCP server[2] so any AI assistant (Claude Desktop, Cursor, etc.) can discover your log sources, introspect schemas, and query directly. And a Rust CLI[3] with syntax highlighting and `--output jsonl` for piping — which means you can write a skill[4] that teaches the agent to triage incidents by running `logchef query` and `logchef sql` in a structured investigation workflow (count → group → sample → pivot on trace_id).

The interesting bit is this ends up very similar to what OP describes — an agent that iteratively queries logs to narrow down root cause — except it's composable pieces you self-host rather than an integrated product.

[1] https://github.com/mr-karan/logchef

[2] https://github.com/mr-karan/logchef-mcp

[3] https://logchef.app/integration/cli/

[4] https://github.com/mr-karan/logchef/tree/main/.agents/skills...
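
That count → group → sample → pivot loop, spelled out as the kind of queries the skill drives (schema is illustrative):

    STEPS = [
        # 1. count: is anything actually anomalous?
        "SELECT count() FROM logs WHERE level = 'error' AND ts > now() - INTERVAL 1 HOUR",
        # 2. group: which component is producing the errors?
        "SELECT service, count() AS c FROM logs WHERE level = 'error' GROUP BY service ORDER BY c DESC LIMIT 10",
        # 3. sample: what do a few of them actually say?
        "SELECT ts, message FROM logs WHERE level = 'error' AND service = 'checkout' LIMIT 20",
        # 4. pivot: follow one request across every service.
        "SELECT ts, service, level, message FROM logs WHERE trace_id = 'abc123' ORDER BY ts",
    ]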

testbjjl|2 days ago

> SQL is the best exploratory interface for LLMs

Any qualifiers here from your experience or documentation?

p0w3n3d|2 days ago

That's contrary to my experience. Logs contain a lot of noise and unnecessary information, especially in Java, hence it's best to prepare them before feeding them to an LLM. Not to mention the wasted tokens...

shad42|2 days ago

LLMs are better now at pulling context (as opposed to you feeding everything you can into the prompt). So you can expose enough query primitives to the LLM that it's able to filter out the noise.

I don't think implementing filtering at log ingestion is the right approach, because you don't know what is noise at that stage. We spent more time thinking about the schema and indexes to make sure complex queries perform at scale.
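
To make that concrete, here is the kind of schema decision involved (hypothetical, not Mendral's actual tables): an ORDER BY that matches the agent's narrow filters, plus a token bloom-filter index for needle-in-haystack text searches.

    import clickhouse_connect

    client = clickhouse_connect.get_client(host="localhost")
    client.command("""
        CREATE TABLE ci_logs (
            ts      DateTime CODEC(Delta, ZSTD),
            repo    LowCardinality(String),
            job     LowCardinality(String),
            level   LowCardinality(String),
            message String CODEC(ZSTD),
            INDEX msg_tokens message TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4
        )
        ENGINE = MergeTree
        PARTITION BY toDate(ts)
        ORDER BY (repo, job, ts)
    """)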

tehjoker|2 days ago

Interesting article, but there's no rate of investigation success quoted. The engineering is interesting, but it's hard to know if there was any point without some kind of measure of its usefulness.

shad42|2 days ago

We did not want to make the post engineering-focused, but we have 18 companies in production today (we wrote about PostHog in the blog). At some point we should post some case studies. The metric we track for usefulness is our monthly revenue :)

iririririr|2 days ago

am i reading correctly that the compression is just relational records? i.e. omit the PR title, just point to it?

aluzzardi|2 days ago

There are 2 layers of compression:

- ZSTD (actual data compression)

- De-duplication (i.e. what you're saying)

Although AFAIK it's not "just point to it" but rather storing sorted data and being able to say "the next 2M rows have the same PR Title"
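
In ClickHouse terms, that deduplication falls out of sorting (illustrative DDL, not the actual schema): with rows sorted by PR, the pr_title column becomes long runs of one value, which LowCardinality dictionary-encodes and ZSTD then squeezes to almost nothing.

    DDL = """
    CREATE TABLE ci_logs (
        pr_number UInt32,
        pr_title  LowCardinality(String) CODEC(ZSTD(3)),
        ts        DateTime,
        message   String CODEC(ZSTD(3))
    )
    ENGINE = MergeTree
    ORDER BY (pr_number, ts)
    """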

_boffin_|2 days ago

Excited to go through this!

_boffin_|7 minutes ago

Eh.. not what i was hoping for.

dbreunig|2 days ago

Check out “Recursive Language Models”, or RLMs.

I believe this method works well because it turns a long-context problem (hard for LLMs) into a coding and reasoning problem (much better!). You're leveraging the last 18 months of coding RL by changing your scaffold.

koakuma-chan|2 days ago

This seems really weird to me. Isn't that just using LLMs in a specific way? Why come up with a new name "RLM" instead of saying "LLM"? Nothing changes about the model.

truth_seeker|2 days ago

If even the top 250 npm packages were refactored by an AI coding agent from a security, performance, and user-friendly-API point of view, the whole JS ecosystem would be in a different shape.

The same applies to other language communities, of course

lofaszvanitt|1 day ago

Slapping lipstick on a monkey kinda article.

whoami4041|2 days ago

"LLMs are good at SQL" is quite the assertion. My experience with LLM generated SQL in OLTP and OLAP platforms has been a mixed bag. IMO analytics/SQL will always be a space that needs a significant weight of human input and judgement in generating. Probably always will be due to the critical business decisions that can be made from the insights.

shad42|2 days ago

What we learned while building this is that every token in the context matters. We spend a lot of time watching logs of agent sessions, changing the tool params, the errors returned by tools, the agent prompts, etc...

We noticed, for example, the importance of letting the model pull from the context instead of pushing lots of data into the prompt. We have "complex" error reporting because we have to differentiate between real non-retryable errors and errors that teach the model to retry differently. It changes the model's behavior completely.
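
For instance, a tool error in that spirit might be shaped like this (invented format, not Mendral's):

    # Errors carry a machine-readable hint so the model retries differently
    # instead of giving up or repeating the same query.
    def tool_error(kind: str, detail: str, retry_hint: str | None) -> dict:
        return {
            "ok": False,
            "kind": kind,              # e.g. "result_too_large" vs "bad_sql"
            "detail": detail,
            "retryable": retry_hint is not None,
            "retry_hint": retry_hint,  # fed back into the agent's context
        }

    # Teaches a narrower retry:
    tool_error("result_too_large", "query returned 4M rows",
               "add a LIMIT and a narrower time range")
    # Teaches abandoning this path:
    tool_error("table_missing", "no such table: deploy_logs", None)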

Also, I agree with "significant weight of human input and judgement": we spent lots of time optimizing the index and thinking about how to organize the data so queries perform at scale. Claude wasn't very helpful there.

blharr|2 days ago

"LLMs are good at [task I'm not good enough at to tell the LLM is bad at]" is becoming common

dylan604|2 days ago

> IMO analytics/SQL will always be a space that needs a significant weight of human input and judgement in generating.

Isn't that precisely what is done when prompting?

aluzzardi|2 days ago

> My experience with LLM generated SQL in OLTP and OLAP platforms has been a mixed bag

Models are evolving fast. If your experience is older than a few months, I encourage you to try again.

I mean this with the best intentions: it's seriously mind boggling. We started doing this with Sonnet 4.0 and the relevance was okay at best. Then in September we shifted to Sonnet 4.5 and it's been night and day.

Every single model released since then (Opus 4.5, 4.6) has meaningfully improved the quality of results

kikki|2 days ago

Unrelated; what does "mendral" mean? It's a very... unmemorable word

shad42|2 days ago

I'm sure you've heard it before: there are only two hard things in CS, cache invalidation and naming things.

In the history of this company, I can honestly say that this SQL/LLM thing wasn't the hardest :)

Noumenon72|2 days ago

Google says a shaft or spindle on a lathe, to which work is fixed while being turned. They could probably make up a story about "we're the center point that lets your LLM work" or something.

yellow_lead|2 days ago

Why the editorialization of the title? "LLMs Are Good at SQL. We Gave Ours Terabytes of CI Logs."

dang|2 days ago

I don't think we (mods) did that one, but I do like it, because the original title would provoke many comments reacting only to the "LLMs are good at SQL" claim in the title, reducing discussion of the actual post. The comments do have some of this, but it would be worse if that bit were also in the title.

(In that way you can see the title edit as conforming to the HN guideline: "Please use the original title, unless it is misleading or linkbait; don't editorialize." under the "linkbait" umbrella. - https://news.ycombinator.com/newsguidelines.html)

hal9000xbot|2 days ago

[deleted]

emp17344|2 days ago

I looked through this user's comment history. This is pretty obviously a bot.

TheRealPomax|2 days ago

Title tells us nothing: what's the tl;dr?