I was excited when I first heard about Server Timing, until I realised it was limited to a single duration value. Being able to model an actual timeline would make it significantly more useful.
The screenshot shows a breakdown of timing by component (Middleware, router, view, etc.). Or perhaps I don't quite understand what you mean by "model an actual timeline"?
As a side note, we've been using miniprofiler[0] for a long while now; it has some limitations, but it can be very useful at times.
I think I get what OP is saying. The screenshot [1] shows you the aggregate time of each arbitrary component, but it's not mapped out along a time series, like you see in the Network panel's Waterfall or in a Performance recording.
I'll see if there's anything we can do on our side (I'm the DevTools tech writer), but I think it's an API limitation. We might be able to use some arbitrary convention in the timing values that lets us plot along a time series...
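Purely hypothetical, but one such convention could be smuggling a start offset into each entry's description, e.g.:

    Server-Timing: sql-1;dur=12;desc="start=3", sql-2;dur=40;desc="start=18", view;dur=55;desc="start=60"

where "start" is milliseconds since the request began. DevTools could then lay the entries out as a small waterfall instead of a flat list. Nothing in the spec defines that, though, so the server and the tooling would both have to agree on it.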
> To minimize the HTTP overhead the provided names and descriptions should be kept as short as possible - e.g. use abbreviations and omit optional values where possible.
I could see significant issues if we tried to send data in a timeline fashion (such as creating a metric for each database record call in an N+1 scenario).
One idea: pass down a URI (i.e. https://scoutapp.com/r/ID) that, when clicked, provides full trace information.
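Concretely, that could just be one more entry whose description carries the link (the metric names here are only for illustration):

    Server-Timing: db;dur=53.2;desc="Database", view;dur=47.5;desc="View", trace;dur=0;desc="https://scoutapp.com/r/ID"

The header stays tiny no matter how many queries the request made, since the full N+1 detail lives behind the link. Whether a browser renders the description as something clickable is up to its DevTools, of course.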
You should really just instrument your application with metrics for a time series database like Prometheus, Influx, or OpenTSDB. You'll find a mature set of tools and methodology. Server Timing looks very naive.
Application instrumentation - whether via Prometheus, StatsD, Scout, or New Relic - solves a very different problem than this. The server timing metrics here are actually extracted from an APM tool (Scout), so you get the best of both worlds.
With those tools, you do not get immediate feedback on the timing breakdown of a web request. At worst, the metrics are heavily aggregated. At best, you'll need to wait a couple of minutes for a trace.
Profiling tools that give immediate feedback on server-side production performance have their place, just like those that collect and aggregate metrics over time.
This gem seems like a good way to get a full app view while remaining in the browser. The only thing I might add would be an example of turning on these metrics on a per-request basis (via a parameter), so that you could check performance for users other than admin users.
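Something like this rough Rack middleware could do it. To be clear, this is a sketch and not the gem's documented API; ScopedServerTiming, the server_timing param, and SERVER_TIMING_TOKEN are names I made up for illustration:

    # Hypothetical middleware: only expose the Server-Timing header to
    # requests that opt in with a matching ?server_timing=TOKEN param,
    # so non-admin traffic can be profiled on demand.
    class ScopedServerTiming
      def initialize(app, token:)
        @app = app
        @token = token
      end

      def call(env)
        status, headers, body = @app.call(env)
        request = Rack::Request.new(env)
        unless @token && request.params["server_timing"] == @token
          headers.delete("Server-Timing")
        end
        [status, headers, body]
      end
    end

    # config/application.rb:
    # config.middleware.use ScopedServerTiming, token: ENV["SERVER_TIMING_TOKEN"]

You'd want the token to be unguessable (or a signed value), since it exposes timing internals to whoever holds it.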
Aside: I've used Scout on a production application and it is similar in quality to New Relic but far simpler to understand.
Seems to be using the `Server-Timing` header — are there docs somewhere on what this expects & what features it supports? Are other browsers likely to follow it?
Regarding browser support, I believe the standard is just an HTTP response header, so every browser receives it; it's just a matter of surfacing it in each browser's respective developer tools.
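For reference, the whole mechanism is a single response header, something along these lines (current draft syntax):

    HTTP/1.1 200 OK
    Server-Timing: sql;dur=53.2;desc="SQL", view;dur=47.5;desc="View", total;dur=112.4

The W3C Server Timing draft spells out the exact grammar; rendering the values nicely (like the breakdown in the screenshot upthread) is then each browser's DevTools call.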
The server timing metrics here are actually extracted from an APM tracing tool (Scout).
Tracing services generally do not give immediate feedback on the timing breakdown of a web request. At worst, the metrics are heavily aggregated. At best, you'll need to wait a couple of minutes for a trace.
The Server Timing API (which is how this works) gives immediate performance information, shortening the feedback loop and allowing you to do a quick gut-check on a slow request before jumping to your tracing tool.