It's not always the right approach, but it's a good reminder to consider "do nothing" as the proper response to a weird edge case that is presented as a [potential] problem.
I am just a dumb user.
I use clockspeed (sntpclock, clockadd and clockview) with a short list of compatible servers I consider "reliable". As far as I can tell, it works.
Time needs vary widely for people. Some people need nanosecond precision. Probably most end-users wouldn't notice for quite a while if their system clock is off by literally years... I know I've done it before without noticing. However, the existence of the latter doesn't somehow invalidate the former... some people really do care quite deeply. If you are in the latter, it hardly much matters what you do. "Manually setting it off of my cell phone" or "ignoring it entirely" are mostly valid options.
I'm not very familiar with ntpd, but this article doesn't cover what would happen if the half-hour polling just happens to occur right during the leap second. If the code receives "59 minutes and 60 seconds" but doesn't expect that to be possible, I could imagine that being a problem.
I don't think it would see this. It would see "59 minutes and 59 seconds" and either see a one-second discrepancy with the local system clock on that poll, or on the next one.
UTC inserts the leap second by extending the last minute before midnight by one second and does indeed specify that this ought to be shown as, for example, "59 minutes, 60 seconds, 123 milliseconds."
NTP doesn't seem to use the human-readable format for its timestamps, though. My 30-second skim says it uses a variation on epoch time that counts from Jan 1 1900 rather than Jan 1 1970. UNIX epoch time inserts the leap second by waiting until 1 second after midnight and then jumping the counter back 1 second (which corresponds to a long 0th second rather than a long 59th second, but whatever, in spirit you are right) [1].
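For what it's worth, the gap between the two epochs is a fixed constant, so converting is a one-liner. A sketch in Python; the offset is just the 70 years of seconds between 1900 and 1970 (25567 days, including 17 leap days):

```python
# Sketch: converting an NTP-era timestamp (seconds since 1900-01-01)
# to Unix epoch time (seconds since 1970-01-01).
NTP_TO_UNIX_OFFSET = 2_208_988_800  # 25567 days * 86400 s/day

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert seconds-since-1900 to seconds-since-1970."""
    return ntp_seconds - NTP_TO_UNIX_OFFSET

# The NTP timestamp for 1970-01-01 00:00:00 UTC is exactly the offset,
# so it maps to Unix time 0:
print(ntp_to_unix(2_208_988_800))  # 0
```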
I see two potential sources of error: 1. a 1s correction/drift proving to be "too much" for synchronization in a system that typically sees far smaller corrections or drifts, or 2. a well-meaning fool writes assert(s <= 59) in some code that touches a correctly-handled UTC stamp during the leap second :-)
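Amusingly, Python's standard library illustrates both sides of that. `time.strptime` follows the C convention and tolerates a 60th second, while `datetime` simply cannot represent one, which is effectively that assert baked into the type:

```python
import time
from datetime import datetime

# time.strptime follows C's strptime: %S accepts values up to 61,
# so a leap-second stamp parses fine and tm_sec comes back as 60.
t = time.strptime("2016-12-31 23:59:60", "%Y-%m-%d %H:%M:%S")
print(t.tm_sec)  # 60

# datetime can't hold second 60 at all -- constructing one from the
# same stamp raises ValueError, the built-in assert(s <= 59).
try:
    datetime.strptime("2016-12-31 23:59:60", "%Y-%m-%d %H:%M:%S")
    print("parsed")
except ValueError:
    print("rejected")  # this branch runs
```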
I read yesterday that the leap second was handled by repeating the last second of the day. So you would see 59 seconds for two seconds but never 60 seconds. I guess different systems and time standards may handle it differently.
I'm sure that's how some systems implement it and it's close enough to the truth that I'm sure it will frequently be reported that way in the news. But I'm also quite sure that the actual standards are as I said they were. See the wiki link for a detailed timeline.
> Finally, if you are one of the exceedingly few people for whom the clock being off by a second actually matters, then I'm pretty sure you also know how to deal with it.
One alternative to NTP is PTP. [1] Quoting:

> IEEE 1588 is designed to fill a niche not well served by either of the two dominant protocols, NTP and GPS. IEEE 1588 is designed for local systems requiring accuracies beyond those attainable using NTP.
There is a point to be made that implementing it correctly would involve extra code for what is essentially an edge case. That extra code could prove to be a larger security concern than being a second off every once in a while.
Your clock is almost certain to be a little off every now and again anyway. If it weren't, there would be no point in running NTP.
I'm pretty sure OpenBSD core doesn't have any of those, and that's really what the project is caring about. Problems may happen in ports, but there's a reason why they're ports.
Also, as stated in the article:
> Finally, if you are one of the exceedingly few people for whom the clock being off by a second actually matters, then I'm pretty sure you also know how to deal with it.
I would think that waveform capture on the electric grid could be affected, since timeseries data logging feeds into subsequent analysis. Frequently these will be relative-time snapshots or use a different time basis, so I'm not sure whether they are actually impacted.
The only case I'm aware of where people would care is if the grid frequency drooped at that exact time and someone was trying to do analysis around it. Otherwise they are just doing things in real time as they happen, and time tracking is not necessarily involved except for logging.
Anyway, I frequently run into timeseries trending clients that cannot handle displaying daylight saving time transitions correctly, let alone hope that they handle leap seconds correctly.
Shouldn't `ntp`, over a short period of time, simply lengthen the actual amount of wall clock time per 'second' exposed to the kernel/userland? So there should never be a huge delta of 1s, nor should time ever go in reverse.
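That's essentially what adjtime(2)-style slewing does: the kernel absorbs the offset by running each tick slightly long or short. A rough back-of-the-envelope, assuming the 500 ppm maximum slew rate commonly cited for ntpd (that figure is an assumption here, not something from the article):

```python
# Sketch: how long a gradual slew takes to absorb a 1 s offset.
# 500 ppm means 0.5 ms of correction per second of real time;
# treat that rate as an assumed ntpd-like maximum.
offset_s = 1.0
slew_rate = 500e-6  # 500 ppm

seconds_to_amortize = offset_s / slew_rate
print(seconds_to_amortize)        # 2000.0 seconds
print(seconds_to_amortize / 60)   # about 33 minutes
```

So a full leap second is well within what a half-hour-ish slew can smooth out, without any backward step visible to userland.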