The sad truth is that in many cases latency isn't the most critical thing to track. We absolutely do track it, because we expect authentication requests to complete. But there are many moving parts here that make reliable tracking not entirely feasible:
* end location of user
* end location of customer service
* third party login components (login with google, et al)
* corporate identity providers
* webauthn
* customer specific login mechanism workflows
* custom integrations for those login mechanisms
* user's user agent
* internet connectivity

All of those significantly influence response times in a way that makes tracking latency next to useless. Maybe there is something we can be doing, though. In more than a couple of scenarios we do have tracking, metrics, and alerting in place; it just doesn't end up in our SLA.
scottlamb|3 months ago
The same can apply to latency. What is the latency of requests to your system, including dependencies you choose but excluding dependencies the customer chooses? The network leg from the customer or user to your system is a bit of a gray area. The simplest thing to do is measure each request's latency from the point of view of your backend rather than the initiator. This is probably good enough, although in theory it lets you off the hook a bit too easily: to some extent you can choose whether you run near the initiator and how many round trips are required, and servers can underestimate their own latency or entirely miss requests during failures. But it's not fair to fail your SLA because of end-user bufferbloat, bad wifi, a crappy ancient Chromebook with too many open tabs, the customer's webapp server in a GC spiral, or whatever. It's basically impossible to make any 99.999% promises when those things are in play.
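Measuring from the backend's point of view can be as simple as wrapping each handler in a timer. A minimal sketch (the decorator name and the in-memory `observed_ms` sink are hypothetical; a real system would export to a metrics pipeline):

```python
import functools
import time

def server_side_latency(handler):
    """Record each request's latency as the backend sees it,
    from handler entry to handler exit (not the initiator's view)."""
    observed = []  # hypothetical sink; stands in for a metrics export

    @functools.wraps(handler)
    def wrapped(*args, **kwargs):
        start = time.monotonic()
        try:
            return handler(*args, **kwargs)
        finally:
            observed.append((time.monotonic() - start) * 1000.0)

    wrapped.observed_ms = observed
    return wrapped
```

Note this records only requests the handler actually sees, which is exactly the caveat above: it can miss requests entirely during failures.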
My preferred form of SLO is: x% of requests given y ms succeed within y ms, measured by my server. ("given" meaning "does not have an upfront timeout shorter than" and "isn't aborted by the client before".) I might offer a few such guarantees for a particular request type, e.g.:
* 50% of lookups given 1 ms succeed within 1 ms.
* 99% of lookups given 10 ms succeed within 10 ms.
* 99.999% of lookups given 500 ms succeed within 500 ms.
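The "given y ms" qualifier can be made mechanical: a request counts toward the SLO only if it had no upfront timeout shorter than the threshold and wasn't aborted by the client first. A sketch of how such a guarantee could be evaluated (the `Request` record and field names are assumptions, not anyone's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    latency_ms: float            # measured server-side
    timeout_ms: Optional[float]  # client's upfront timeout, if any
    aborted: bool                # client gave up before completion
    succeeded: bool

def slo_met(requests, threshold_ms, target_fraction):
    """x% of requests *given* y ms succeed within y ms.

    A request is "given" y ms only when its upfront timeout is at
    least y ms (or absent) and the client didn't abort it early.
    """
    eligible = [r for r in requests
                if (r.timeout_ms is None or r.timeout_ms >= threshold_ms)
                and not r.aborted]
    if not eligible:
        return True  # vacuously met: no requests were given y ms
    hits = sum(1 for r in eligible
               if r.succeeded and r.latency_ms <= threshold_ms)
    return hits / len(eligible) >= target_fraction
```

With this shape, each tier (50%/1 ms, 99%/10 ms, 99.999%/500 ms) is just a separate `slo_met` call over the same request log with its own threshold and target.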
I also like to have client-side and whole-flow measurements, but I'm much more cautious about promising anything about them.
bostik|3 months ago
Funnily enough, looking through your itemisation I spot two groups that would each benefit from their own kinds of latency monitoring. End location and internet connectivity of the client go into the first. Third-party providers go into the second.
For the first, you'd need your own probes reporting from the most actively used networks and locations around the world; that would give you a view into the round-trip latency per major network path. For the second, you'd want to track the time spent between the steps that you control, which in turn would give you a good view into the latency-inducing behaviour of the different third-party providers. Neither is SLA material, but both would certainly be useful during enterprise contract negotiations. (Shooting down impossible demands with hard data tends to fend off even the most obstinate objections.)
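Tracking "time between the steps you control" can be as simple as stamping the moments you hand off to and hear back from the provider; the gap between those two stamps approximates the third party's contribution. A sketch under assumed step names (the `FlowTimer` class and the OAuth-style step labels are hypothetical):

```python
import time

class FlowTimer:
    """Record monotonic timestamps at the steps we control; the span
    between our redirect-out and the callback-in approximates the
    third-party provider's share of the flow's latency."""
    def __init__(self):
        self.marks = {}

    def mark(self, step):
        self.marks[step] = time.monotonic()

    def span_ms(self, start_step, end_step):
        return (self.marks[end_step] - self.marks[start_step]) * 1000.0

# hypothetical usage in an OAuth-style login flow:
t = FlowTimer()
t.mark("redirect_to_idp")    # we hand the user off to the provider
# ... third-party round trip happens here ...
t.mark("callback_received")  # provider redirects the user back to us
third_party_ms = t.span_ms("redirect_to_idp", "callback_received")
```

Aggregating these spans per provider gives exactly the kind of hard data useful in contract negotiations, without ever needing to instrument the provider itself.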
User-agent and bespoke integrations/workflows are entirely out of your hands, and I agree it's useless to try to measure latency for them specifically.
Disclaimer: I have worked with systems where the internal authX roundtrip has to complete within 1ms, and the corresponding client-facing side has to complete its response within 3ms.