I was excited when I first heard about Server Timing, until I realised it was limited to a single duration value. Being able to model an actual timeline would make it significantly more useful.
The screenshot shows a breakdown of timing by component (Middleware, router, view, etc). Or perhaps I don't quite understand what you mean by "model an actual timeline"?
You should really just instrument your application with metrics for a time series database like Prometheus, Influx, or OpenTSDB. You'll find a mature set of tools and methodology. Server Timing looks very naive.
This gem seems like a good way to get a full app view while remaining in the browser. The only thing I might add would be an example of enabling these metrics on a per-request basis (via a parameter), so you could check performance for users other than admins.
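That per-request toggle is easy to sketch as Rack middleware. To be clear, the parameter name below is invented for illustration — it isn't something the gem provides:

```ruby
# Hypothetical sketch: emit a Server-Timing header only when the
# request opts in via a "server_timing=1" query parameter (the
# parameter name is made up for this example).
class OptInServerTiming
  def initialize(app)
    @app = app
  end

  def call(env)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    status, headers, body = @app.call(env)
    if env["QUERY_STRING"].to_s.split("&").include?("server_timing=1")
      ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000
      headers["Server-Timing"] = format("app;dur=%.1f", ms)
    end
    [status, headers, body]
  end
end
```

In production you'd want to gate this on something stronger than a bare query parameter (a signed cookie, say), so arbitrary visitors can't probe your timing breakdown.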
Seems to be using the `Server-Timing` header — are there docs somewhere on what this expects & what features it supports? Are other browsers likely to follow it?
Regarding browser support: I believe the standard is just an HTTP response header, so any server can send it to any browser. It's just a matter of each browser surfacing it in its developer tools.
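For reference, in the spec's current form the header value is a comma-separated list of entries shaped like `name;dur=<milliseconds>;desc="<label>"`. A toy parser — illustrative only, not how DevTools or the gem does it — shows the shape:

```ruby
# Toy parser for a Server-Timing header value, e.g.
#   db;dur=53, view;dur=12.4;desc="View render"
# Returns one hash per metric entry.
def parse_server_timing(value)
  value.split(",").map do |entry|
    name, *params = entry.strip.split(";")
    metric = { "name" => name }
    params.each do |param|
      key, val = param.split("=", 2)
      metric[key] = val ? val.delete('"') : true
    end
    metric
  end
end
```

So `parse_server_timing('db;dur=53, view;dur=12.4;desc="View render"')` yields a hash per metric, with `"dur"` and `"desc"` keys as strings.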
[+] [-] gingerlime|8 years ago|reply
As a side note, we've been using miniprofiler[0] for a long while now; it has some limitations but can be very useful at times.
[0] https://github.com/MiniProfiler/rack-mini-profiler
[+] [-] mooreds|8 years ago|reply
Aside: I've used scout on a production application and it is similar in quality to new relic but far simpler to understand.
[+] [-] cagmz|8 years ago|reply
Is there an automated way of getting the average of a performance metric (e.g. Time spent in AR) over N requests?
[+] [-] itsderek23|8 years ago|reply
Small world!
> Is there an automated way of getting the average of a performance metric (e.g. Time spent in AR) over N requests?
I'm assuming you mean w/Chrome dev tools + server timing?
Not that I'm aware of...DevTools is an area I'd like to explore more though.
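For what it's worth, if you capture the raw Server-Timing header values yourself (via curl or a script), the aggregation step is trivial. A sketch — the helper name is invented here, and fetching the headers is left out:

```ruby
# Average the "dur" value of one named metric across many
# Server-Timing header values. Hypothetical helper, not part of
# any gem or of DevTools.
def average_metric(header_values, name)
  durations = header_values.flat_map do |value|
    value.split(",").filter_map do |entry|
      parts = entry.strip.split(";")
      next unless parts.first == name
      dur = parts.find { |p| p.start_with?("dur=") }
      dur && dur.delete_prefix("dur=").to_f
    end
  end
  return nil if durations.empty?
  durations.sum / durations.size
end
```

E.g. over the responses `"db;dur=10, view;dur=5"` and `"db;dur=20"`, `average_metric(values, "db")` averages the two `db` entries.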
[+] [-] hultner|8 years ago|reply
Anyone know of a similar tool for Python3/flask (or wsgi probably)?
[+] [-] itsderek23|8 years ago|reply
The server timing metrics here are actually extracted from an APM tracing tool (Scout).
Tracing services generally do not give immediate feedback on the timing breakdown of a web request. At worst, the metrics are heavily aggregated. At best, you'll need to wait a couple of minutes for a trace.
The Server Timing API (which is how this works) gives immediate performance information, shortening the feedback loop and letting you do a quick gut-check on a slow request before jumping to your tracing tool.