item 47057052

Dimittri | 12 days ago

Totally get the 'token wastage' point: sending noise to an LLM is literally burning money.

But another, maybe bigger, cost is your time reviewing those 'bandaid fixes.' If you're merging only 30%, that means you're spending 70% of your review bandwidth on PRs that shouldn't exist, right?

We deduplicate before the Claude analysis using the alert context, and again after it based on the RCA, so the PRs you have to review contain no noise.
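The two-stage dedup described above could look roughly like this. A minimal sketch, assuming the pipeline shape only: the fingerprint fields (`error_type`, `top_frame`) and the RCA similarity heuristic are illustrative choices, not the actual implementation.

```python
from difflib import SequenceMatcher

def dedupe_alerts(alerts):
    """Stage 1 (before analysis): collapse alerts that share a fingerprint
    built from their context. Error type + top stack frame is an assumed
    fingerprint, chosen here for illustration."""
    seen = {}
    for alert in alerts:
        key = (alert["error_type"], alert["top_frame"])
        seen.setdefault(key, alert)  # keep the first alert per fingerprint
    return list(seen.values())

def dedupe_by_rca(candidates, threshold=0.8):
    """Stage 2 (after analysis): drop candidate fixes whose root-cause
    summaries are near-duplicates of one already kept."""
    kept = []
    for cand in candidates:
        is_dup = any(
            SequenceMatcher(None, cand["rca"], k["rca"]).ratio() >= threshold
            for k in kept
        )
        if not is_dup:
            kept.append(cand)
    return kept
```

Stage 1 is cheap exact matching so duplicate alerts never reach the model; stage 2 catches duplicates that only become visible once the RCA is written, so two differently-worded alerts with the same root cause still produce one PR.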

Why don't you trust an agent to triage alerts and issues?


nojs | 12 days ago

Yeah. What I find in practice is that since the majority of these PRs require manual intervention (even if minor, like a single follow-up prompt), it's not significantly better than just hammering them all out myself in one session a few times per week, giving them my full attention for that period.

The exception is when a fix is a) trivial or b) affecting a real user and therefore needs to be shipped quickly; in those cases the current workflow is useful. But yeah, the real step change was having Claude hit the Sentry APIs directly and pull the info it needs, whether async or not.

I'd also imagine that people's experiences with this vary a lot depending on the size and stage of the company - our focus is developing new features quickly rather than maintaining a 100% available critical production service, for example.

Dimittri | 12 days ago

Interesting. It makes sense that it depends on the number of alerts you receive. But I'd think that if 70% of the PRs you receive are noise, an AI triager could be useful, provided you give it the context it needs based on your best practices. I'm very curious about the kinds of manual intervention you do on PRs when one is required. What does the follow-up prompt look like? Is it because the fix was bad, because the RCA itself was wrong, or because of something else?