And I'm calling that aspect of the article nonsense. The cost of preemption is incurred only infrequently, so even if each sample slightly perturbs the execution that follows it, it does so only infrequently, and the point at which it measures (assuming it hasn't been affected by the prior sample) more accurately represents a system without any instrumentation.
Assuming each invocation has a unique stack trace, each call site can still be tracked effectively through sampling. Looking at his examples, this seems reasonably likely, as they all have quite different behaviours.
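To make that concrete, here's a rough sketch (mine, not anything from the blog; names like sampler, call_site_a and busy_work are invented for illustration) of how sampling separates call sites: a background thread periodically snapshots the main thread's stack, and because each call site produces a distinct stack signature, the samples bucket cleanly per call site.

    import collections
    import sys
    import threading
    import time
    import traceback

    def sampler(target_thread_id, counts, stop_event, interval=0.005):
        # Periodically snapshot the target thread's stack and tally it.
        while not stop_event.is_set():
            frame = sys._current_frames().get(target_thread_id)
            if frame is not None:
                # Collapse the stack into a hashable signature; different
                # call sites of the same function give different signatures.
                stack = tuple(
                    (f.filename, f.lineno, f.name)
                    for f in traceback.extract_stack(frame)
                )
                counts[stack] += 1
            time.sleep(interval)

    def busy_work(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    def call_site_a():
        return busy_work(200_000)

    def call_site_b():
        return busy_work(400_000)

    if __name__ == "__main__":
        counts = collections.Counter()
        stop = threading.Event()
        t = threading.Thread(
            target=sampler,
            args=(threading.main_thread().ident, counts, stop),
        )
        t.start()
        for _ in range(50):
            call_site_a()
            call_site_b()
        stop.set()
        t.join()
        # Each distinct stack (and hence each call site) gets its own bucket.
        for stack, n in counts.most_common(5):
            print(n, "samples:", " > ".join(name for _, _, name in stack))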
What tracing does do is permit a clear sequential analysis of macro behaviours at whatever granularity you choose. If those macro behaviours are chosen at a sufficiently low resolution, i.e. coarse enough that the instrumentation overhead is immeasurable, then you obviously get a very clear and accurate picture of the system's behaviour at that reduced resolution.
Tracing RPC calls and other similar behaviours, as done in the blog, is a good example, but the benefit isn't down to increased resolution; quite the opposite.
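To illustrate the coarse-granularity point, here's another rough sketch of my own (invented names like traced and fake_rpc throughout): each macro operation records just two clock reads and a list append, which is immeasurable next to an RPC taking tens of milliseconds, yet the recorded events form an exact sequential record of the macro behaviour.

    import time
    from contextlib import contextmanager

    trace_events = []  # (label, start_seconds, duration_seconds)

    @contextmanager
    def traced(label):
        # Two clock reads and a list append per traced operation: negligible
        # next to an operation that itself takes milliseconds.
        start = time.perf_counter()
        try:
            yield
        finally:
            trace_events.append((label, start, time.perf_counter() - start))

    def fake_rpc(ms):
        # Stand-in for a network round trip.
        time.sleep(ms / 1000.0)

    if __name__ == "__main__":
        with traced("fetch_user"):
            fake_rpc(20)
        with traced("fetch_orders"):
            fake_rpc(35)
        # An exact, sequential record of the macro behaviour.
        for label, start, duration in sorted(trace_events, key=lambda e: e[1]):
            print(f"{label}: {duration * 1000:.1f} ms")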