top | item 43816228


nchinmay | 10 months ago

We had this very issue: a bad configuration change (human error) caused a large, sudden revenue drop along with a drop in our streaming ad event metrics. This is a realtime adtech system, where a delay in detecting sudden changes in business metrics has monetary impact and visibly degrades the customer experience. In this case the major revenue drop was found and addressed immediately, but not all of the alerts we expected went off. The threshold on our streaming ad events metric was statically set too low: it was appropriate at an earlier stage of the business, but as we have grown, it no longer triggers the alert I would have expected to fire first. We do have sophisticated metrics instrumentation and alerting, but effective anomaly detection of sudden upticks/downticks in business metrics, one that accounts for the underlying trends evolving organically with the business, would be a game changer.

Larger, incident-worthy changes in metrics are easier to set static thresholds around, and they ring more than one bell when they occur. I'd be more concerned about small-to-mid deviations from the trend, say a sudden ±10% change in a business metric over X minutes. Can I reliably set a static threshold that will be universally appropriate here? A good anomaly detector would ideally surface something like this without hard-coded alert configs.
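To illustrate the idea, here is a minimal sketch (not a production detector — the class name, window size, and threshold are all assumptions) of comparing each new value against a rolling baseline, so the effective threshold scales with the metric as the business grows instead of being pinned to a stale absolute value:

```python
from collections import deque


class RelativeChangeDetector:
    """Flag sudden relative deviations from a rolling baseline.

    Hypothetical sketch: the baseline is the mean of a trailing window,
    so a "±10% over X minutes" rule adapts automatically as the metric's
    normal level drifts upward with organic business growth.
    """

    def __init__(self, window: int = 30, rel_threshold: float = 0.10):
        self.window = deque(maxlen=window)   # trailing observations
        self.rel_threshold = rel_threshold   # 0.10 == flag +/-10% moves

    def observe(self, value: float) -> bool:
        """Record a new data point; return True if it looks anomalous."""
        anomalous = False
        # Only judge once we have a full window of history.
        if len(self.window) == self.window.maxlen:
            baseline = sum(self.window) / len(self.window)
            if baseline > 0:
                anomalous = abs(value - baseline) / baseline > self.rel_threshold
        self.window.append(value)
        return anomalous


# Usage: a metric hovering near 100 that suddenly drops ~15%.
detector = RelativeChangeDetector(window=5, rel_threshold=0.10)
for v in [100.0, 100.0, 100.0, 100.0, 100.0]:
    detector.observe(v)          # warm-up: never anomalous
normal = detector.observe(105.0)  # +5% vs baseline -> False
alert = detector.observe(85.0)    # ~-16% vs baseline -> True
```

A real system would want something smarter than a plain mean (seasonality, day-of-week effects, variance-aware scoring), but the core point stands: the reference level is computed from recent data rather than hard-coded in an alert config.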
