nojs | 12 days ago

> Especially curious about your current workflows when you receive an alert from any of these channels like Sentry (error tracking), Datadog (APM), or user feedback.

I have a GitHub Action that runs hourly. It pulls new issues from Sentry, grabs as much JSON as it can from the API, and pipes it into Claude. Claude is instructed to either make a PR, open a GitHub issue, or add more logging if the available data is insufficient to diagnose the problem.
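A minimal sketch of what an hourly job like this might look like. The org/project slugs, the `SENTRY_TOKEN` env var, and the `claude -p` invocation are illustrative assumptions, not the poster's actual setup:

```python
"""Rough sketch of the hourly Sentry-to-Claude triage job described above.

Assumptions (not from the original post): the org/project slugs, the
SENTRY_TOKEN env var, and the `claude -p` CLI call are stand-ins for
whatever the author actually wired up in their GitHub Action.
"""
import json
import os
import subprocess
import urllib.request

SENTRY_API = "https://sentry.io/api/0/projects/{org}/{project}/issues/"


def fetch_new_issues(org: str, project: str, token: str) -> list[dict]:
    """Pull unresolved issues from the Sentry issues API."""
    url = SENTRY_API.format(org=org, project=project) + "?query=is:unresolved"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def build_prompt(issue: dict) -> str:
    """Turn one Sentry issue payload into the instruction given to Claude."""
    return (
        "Diagnose this Sentry issue. Either open a PR with a fix, "
        "file a GitHub issue, or add logging if the data below is "
        "insufficient to find the root cause.\n\n"
        + json.dumps(issue, indent=2)
    )


def triage(issue: dict) -> None:
    # Pipe the prompt into the Claude CLI; flags are a guess at the setup.
    subprocess.run(["claude", "-p", build_prompt(issue)], check=True)


if __name__ == "__main__":
    for issue in fetch_new_issues("my-org", "my-project", os.environ["SENTRY_TOKEN"]):
        triage(issue)
```

In a scheduled GitHub Action this would run under `on: schedule: - cron: "0 * * * *"`, with the token supplied as a repository secret.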

I'd say I can merge about 30% of the PRs; for the remainder, the LLM has applied a bandaid fix without digging deep enough into the root cause.

Also the volume of sentry alerts is high, and the issues being fixed are often unimportant, so it tends to create a lot of “busy work”.

Dimittri | 12 days ago

To avoid this 'busy work', we group alerts by RCA (root cause, so no duplicate PRs) and filter by severity (so no PRs for false positives or not-that-important issues). We realized early on that turning every alert into a PR just moves the problem from Sentry to GitHub, which defeats the purpose.
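The grouping/filtering step could be sketched like this. The `root_cause` and `severity` fields are hypothetical; a real pipeline would derive the root-cause key from stack-trace fingerprints or a classification pass:

```python
"""Minimal sketch of dedup-by-root-cause plus severity filtering.

The alert dicts and their `root_cause`/`severity` fields are assumed
shapes for illustration, not any particular vendor's schema.
"""
from collections import OrderedDict

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}


def dedupe_and_filter(alerts: list[dict], min_severity: str = "high") -> list[dict]:
    """Keep one alert per root cause, dropping anything below min_severity."""
    threshold = SEVERITY_RANK[min_severity]
    by_cause: OrderedDict[str, dict] = OrderedDict()
    for alert in alerts:
        if SEVERITY_RANK.get(alert.get("severity", "low"), 0) < threshold:
            continue  # too minor to be worth a PR
        # First alert seen for a root cause wins; later duplicates are dropped.
        by_cause.setdefault(alert["root_cause"], alert)
    return list(by_cause.values())
```

Run before anything reaches the LLM, this caps the number of PRs at one per distinct root cause above the severity bar.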

Is a one-hour cron job enough to ensure the product's health? Do you receive alerts by email/Slack/other for specific ones, or when a PR is created?

nojs | 12 days ago

interesting. yeah the only reason it's on cron is because the Sentry-GitHub integration didn't work for this (can't remember why), and I didn't want to maintain another webhook.

the timing is not a huge issue though, because the type of bugs being caught at this stage are rarely so critical that they need to be fixed in less time than that - and the bandwidth is limited by someone reviewing the PR anyway.

the other issue is crazy token wastage, which gets expensive. my gut instinct re triaging is that i want to do it myself in the prompt - but if it prevents noise before reaching claude it may be useful for some folks just for the token savings.

no, I don't receive alerts, because I'm looking at the PR/issues list all day anyway; it would just be noise.