
Saying this is actually the only sane thing to do.

I personally don't care about anything below 200 microseconds, and I think it was a good article if read critically. It does describe why you should not attempt this at the moment if you have lots of nodes that need to stay consistently in sync.

Having a shared 10 MHz reference clock is great, and that gives you a pretty good consistent beat. But I never managed to sync other physical sensors to it, so the technical gotchas are too much for me.

There is madness in time.

Edit: revising my orders of magnitude: honestly, I'm happy if my systems are within 10 ms.



In my opinion, when you want such precision, you need to establish strict constraints on the measurements, for example memory fences: https://www.kernel.org/doc/Documentation/memory-barriers.txt

If you do not do this, the times will never be consistent.
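
To make that concrete, here is a minimal C11 sketch of fencing a clock read, assuming Linux. The helper name is made up and this is not the author's code; the fences stop the compiler and CPU from reordering the measured work across the timestamp.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <time.h>

    /* Hypothetical helper: take a timestamp that neither the compiler
     * nor the CPU may reorder relative to the surrounding work. */
    static inline uint64_t fenced_timestamp_ns(void)
    {
        struct timespec ts;

        /* Full fence: work before this point completes before the
         * clock is read. */
        atomic_thread_fence(memory_order_seq_cst);
        clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
        atomic_thread_fence(memory_order_seq_cst);

        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

(On x86, people often also serialize rdtsc with lfence or cpuid; same idea.)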

The author produced a faulty benchmark.


What benchmark? The only numbers he's measuring himself are on the oscilloscope. Everything else is being measured by chrony. Unless you're talking about a different post on the blog?


He uses Chrony, which uses system time, and compares those times across different machines. Unless proper setup is done, the benchmark is faulty.


Chrony is what's comparing the times. Zero code written by the author is running except to log the statistics chrony created. Are you accusing chrony of failing at one of its core purposes, comparing clocks between computers? What could the author do differently, assuming the author isn't expected to alter chrony's code?
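
For context, "logging the statistics" can be as little as polling chronyc. A minimal C sketch, assuming chronyc is on PATH and using its -c flag for CSV output; this is illustrative, not the author's actual setup:

    #include <stdio.h>
    #include <time.h>

    /* Poll `chronyc -c tracking` once and print the raw CSV line,
     * prefixed with the local wall-clock time for later correlation.
     * Error handling is kept to a minimum. */
    int main(void)
    {
        char line[512];
        FILE *p = popen("chronyc -c tracking", "r");
        if (!p)
            return 1;

        if (fgets(line, sizeof line, p))
            printf("%ld,%s", (long)time(NULL), line);

        return pclose(p) == 0 ? 0 : 1;
    }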


If those times are produced on different architectures, then yes, the comparison can never be accurate enough, since the underlying measurement mechanisms differ fundamentally. While the author goes to great lengths to demonstrate very small time differences, I believe the foundation of their comparison is flawed from the start. I don't want to start a polemic, sorry!


But do you or don't you think chrony knows how to do the memory barriers and everything else properly?

Making the sync work across existing heterogeneous hardware is the goal of the exercise. That can't be a disqualifier.


Chrony is an incredible piece of software, but I am afraid that in this case it is being used incorrectly, unless more details are provided. Benchmarking across an AMD Threadripper, a Raspberry Pi (ARM), and LeoNTP servers cannot be done lightly. I do not see how going down to the nanosecond scale can be done this way without more details, sorry. This is only my opinion, and I know it is not very useful here.

Even the author acknowledges that when they say:

It’s easy to fire up Chrony against a local GPS-backed time source and see it claim to be within X nanoseconds of GPS, but it’s tricky to figure out if Chrony is right or not.


The thing is, any offset or jitter that can't be picked up over the network is irrelevant to what the author is trying to accomplish. And if it can be picked up over the network, I don't see why Chrony is the wrong thing to measure with.

"system time as distorted by getting it to/from the network" is exactly what is supposed to be measured here.



