Hacker News

I've always wondered why system clocks are set to UTC instead of TAI[1]. To me, it makes more sense for OSes to ship UTC as a time zone. Leap seconds would then be tzinfo updates, just like when countries change their daylight saving time. System clocks still wouldn't be guaranteed to be monotonically increasing, but at least there wouldn't be minutes with 61 seconds.

1. http://en.wikipedia.org/wiki/International_Atomic_Time
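The "leap seconds as tzinfo updates" idea above can be sketched as a table lookup over a TAI-based clock. This is a hypothetical illustration, not real tzdata: the table entries and epoch below are made up for demonstration, and a real implementation would ship the full leap-second list with the zone database.

```python
# Hypothetical sketch: if UTC were a "time zone" over TAI, leap seconds
# would just be entries in an offset table, like DST rules in tzdata.
# The table values below are illustrative, not a real leap-second list.
LEAP_TABLE = [
    # (TAI timestamp at which the offset takes effect, TAI-UTC offset)
    (0, 10),        # illustrative starting offset
    (1_000_000, 11),  # illustrative leap-second entry
]

def tai_utc_offset(tai_seconds):
    """Return the TAI-UTC offset in effect at a given TAI timestamp."""
    offset = LEAP_TABLE[0][1]
    for effective, off in LEAP_TABLE:
        if tai_seconds >= effective:
            offset = off
    return offset

def tai_to_utc(tai_seconds):
    """Convert a TAI timestamp to UTC by subtracting the table offset."""
    return tai_seconds - tai_utc_offset(tai_seconds)
```

The kernel would then only ever count TAI seconds monotonically, and the leap-second bookkeeping would live in userspace data files, updated the same way DST rule changes are today.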




The system clocks at the hardware level are so inaccurate that it genuinely doesn't matter. All the problems around the leap second come from the poor "management" of the adjustment, not from any large absolute difference from the reference atomic clocks.

Case in point: the Linux leap second kernel bug that caused problems in 2012. Read the commit message of the fix:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

"This patch tries to avoid the problem by reverting back to not using an hrtimer to inject leapseconds, and instead we handle the leapsecond processing in the second_overflow() function. The downside to this change is that on systems that support highres timers, the leap second processing will occur on a HZ tick boundary, (ie: ~1-10ms, depending on HZ) after the leap second instead of possibly sooner (~34us in my tests w/ x86_64 lapic)."

So the bug was the result of the programmers worrying more about applying the leap second adjustment as fast as possible (in 34 microseconds) than about the fact that making the call from that particular point in the code caused a livelock.

And instead of pushing the change through "as fast as possible," both Google and AWS solve the problem by spreading the adjustment over a long period of time. That is generally the right approach for all automatic adjustments to the system clock: avoid discontinuities.
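The spreading approach ("leap smearing") can be sketched as a simple ramp. The 24-hour window and linear shape below are assumptions for illustration; the real Google and AWS smears have their own window lengths and curve shapes.

```python
# Sketch of a linear leap smear, in the spirit of the Google/AWS approach.
# The 24-hour window and linear ramp are assumptions for illustration.
SMEAR_WINDOW = 24 * 3600  # seconds over which the extra second is spread

def smeared_fraction(seconds_into_window):
    """Fraction of the leap second absorbed so far, growing linearly 0 -> 1."""
    t = min(max(seconds_into_window, 0), SMEAR_WINDOW)
    return t / SMEAR_WINDOW

def smeared_clock(true_seconds, seconds_into_window):
    """Clock reading with the partial leap second slewed in, never jumping."""
    return true_seconds + smeared_fraction(seconds_into_window)
```

Halfway through the window, `smeared_fraction` is 0.5: half the leap second has been absorbed, and no process ever observes a one-second discontinuity.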


I agree. TAI is simpler and seems better suited to be the fundamental time counting mechanism at the OS level.

There was an interesting HN thread along these lines back when the current impending leap second was announced: https://news.ycombinator.com/item?id=8840440


DJB to the rescue, once again. http://cr.yp.to/libtai.html


This is certainly an interesting idea, but isn't TAI/UTC somewhat orthogonal to TZs? I'm in America/Los_Angeles, for example, and if I want my time in TAI, my local time would be ~35 seconds different from my local time in UTC, would it not?


Time zones are offsets from UTC, which change over time (DST, political changes, etc.).

UTC is an offset from TAI, which changes over time (leap seconds).

The time zone files already keep track of historical changes[1].

Conceptually, they're pretty similar; the only difference is that leap seconds have a special clock value (23:59:60 instead of showing you 23:59:59 twice).

  1. https://en.wikipedia.org/wiki/Tz_database#Example_zone_and_rule_lines
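The special 23:59:60 clock value is exactly what trips up standard libraries: Python's datetime, for example, rejects second=60 outright, so a leap second has to be special-cased by hand. A minimal sketch:

```python
# Python's datetime refuses second=60, so the leap second's special clock
# value (23:59:60) has to be formatted by hand -- a small illustration of
# why leap seconds need handling that DST transitions don't.
from datetime import datetime

def format_wall_time(hour, minute, second):
    """Format a wall-clock time, allowing the special :60 leap value."""
    if second == 60:
        return f"{hour:02d}:{minute:02d}:60"
    # Any valid date works here; we only want the time-of-day formatting.
    return datetime(2015, 6, 30, hour, minute, second).strftime("%H:%M:%S")
```

Trying `datetime(2015, 6, 30, 23, 59, 60)` directly raises a ValueError, which is why OS-level smearing or repeating 23:59:59 is so common in practice.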


I would imagine that some hardware RTCs would not support the :60 leap second value. In fact I cannot recall a single RTC that I have dealt with, that understands leap seconds.


Local time zones are offset from UTC, not TAI. If it's 19:45:00 in LA, UTC is 02:45:00 and TAI is 02:45:35. There's no such thing as a "TAI version" of your local time.


The solution is simple math. For computers to base their time on TAI, the time zone conversion changes from "UTC + local time zone offset" to "TAI + UTC offset + local time zone offset", and we reap the rewards of drastically simpler software at the core of our systems.
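The proposed conversion chain can be written out directly. The TAI-UTC offset of 35 seconds is the 2015-era value quoted elsewhere in the thread; the Los Angeles offset below assumes DST is in effect, purely for illustration.

```python
# Sketch of the conversion chain described above: local time derived from
# a TAI base via two offsets. TAI-UTC = 35 s is the 2015-era value quoted
# in this thread; the -7 h offset assumes America/Los_Angeles during DST.
TAI_MINUS_UTC = 35           # in reality a leap-second table lookup
LA_UTC_OFFSET = -7 * 3600    # PDT (assumption for illustration)

def tai_to_local(tai_seconds, tz_offset=LA_UTC_OFFSET):
    """local = TAI - (TAI-UTC) + local time zone offset"""
    return tai_seconds - TAI_MINUS_UTC + tz_offset
```

The point is that only the presentation layer touches the offset tables; the stored timestamps themselves remain a plain, linear count of seconds.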

TAI is defined like UNIX time, as a count of the progression of proper time. It is the primary reference from which we build all other times; UTC is a humanist overlay on TAI that maintains norms, since we need an approximate terrestrial solar time for sanity's sake.

If TAI becomes the base storage representation for timestamps and the internal reference time, the math immediately becomes sane, since TAI can be relied on as a direct, linear sequence of seconds without lookup tables or other cruft. Move the cruft "up the stack," where it doesn't cause the issues we see every time a leap second is needed.


The problem is that "the system clock," in the sense we have it now, is actually "overloaded" with different expectations. At the hardware level, we have hugely inaccurate timers on motherboards, potentially drifting all the time.

Then we have the signal from GPS, though typically only on mobile phones, and other signals on other distribution mechanisms:

"GPS time was zero at 0h 6-Jan-1980 and since it is not perturbed by leap seconds GPS is now ahead of UTC by 16 seconds.

Loran-C, Long Range Navigation time. (..) zero at 0h 1-Jan-1958 and since it is not perturbed by leap seconds it is now ahead of UTC by 25 seconds.

TAI, Temps Atomique International (...) is currently ahead of UTC by 35 seconds. TAI is always ahead of GPS by 19 seconds. "
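The three relationships in the quote above are mutually consistent, which is easy to check (values as of mid-2015, before the impending leap second):

```python
# Checking the arithmetic in the quote above (mid-2015 values):
# TAI-UTC and TAI-GPS together must give GPS-UTC = 16 s, as the quote says.
TAI_MINUS_UTC = 35   # current leap-second total
TAI_MINUS_GPS = 19   # fixed: GPS time matched UTC at its 1980 epoch
GPS_MINUS_UTC = TAI_MINUS_UTC - TAI_MINUS_GPS  # = 16
```

Only the TAI-GPS difference is fixed forever; the other two grow by one with each leap second.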

And we have NTP servers, which differ from one another all the time, and to which our computers connect and try to adjust what they report.

So the bugs lie in how the adjustments are handled; it's not as if the world itself could be made simpler.


Q: What value of TAI will be noon of July 4th, 2030 in New York?

A: Honestly, nobody knows.

You can estimate the number of leap seconds, but you cannot know them (much) in advance. Having the representation of future dates change occasionally does not lead to sanity either.



