I don't think that's the problem; I think that's the one thing that's not a problem. Monotonic time is not that hard. The hardest thing about it is just convincing programmers to use it, and making sure your libraries use it properly. I don't want to say it's trivial, but it's not that hard once both you and your code base internalize "convert everything to internal UTC representation as quickly as possible, convert to local time as late as possible on the way out".
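A minimal sketch of that discipline in java.time (the zone and sample date here are purely illustrative): interpret local input in its zone at the edge, store only the `Instant`, and re-apply a zone only on the way out.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class UtcBoundary {
    // Edge of the system: interpret local input in its zone, keep only the Instant.
    static Instant ingest(String localIso, ZoneId zone) {
        return ZonedDateTime.of(LocalDateTime.parse(localIso), zone).toInstant();
    }

    // Way out: re-apply a zone only when rendering for a human.
    static String render(Instant stored, ZoneId zone) {
        return stored.atZone(zone).toLocalDateTime().toString();
    }

    public static void main(String[] args) {
        ZoneId tokyo = ZoneId.of("Asia/Tokyo");
        Instant stored = ingest("2019-04-30T09:00:00", tokyo);
        System.out.println(stored);                // 2019-04-30T00:00:00Z (UTC internally)
        System.out.println(render(stored, tokyo)); // 2019-04-30T09:00 (local on the way out)
    }
}
```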
It's everything else that's a problem. It's the places where you input a Japanese date, and there's an era that the code has never heard of for some old system. There is no way for that system to even conceivably handle it without some sort of update, at least a database update. They're talking about creating a new character for the era, which literally no system on Earth can currently use because it doesn't even exist yet, and, again, anything old that can't be updated can't conceivably display it correctly. They might be able to display it as the individual characters, but they still won't know that it's a date, or that they have to treat it as one. As mipmap04 says, it's the places where Japanese dates were stored as a string or something, because goodness knows I've seen enough US dates stored as strings.
And inferring from the article that the tax authority is going to extend the old era, I assume that at the very least a good chunk of the government works on Japanese dates, so it's going to be important not to, say, print tax forms that have � in the date, etc.
UTC is good but not enough; it may still bite you in the behind.
What's the time difference between these 2 timestamps:
2016-12-31 23:59:50 UTC (unix time 1483228790)
2017-01-01 00:00:10 UTC (unix time 1483228810)
That's right: 21 seconds, not the 20 that naive unix-time subtraction gives you, because of the leap second 23:59:60 inserted at the end of 2016.
So then your next option is to use TAI (international atomic time). Of course for most applications, this extra/missing leap second doesn't matter.
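To make the trap concrete, here's a sketch using java.time (whose `Instant` follows the POSIX-style smeared timeline, not TAI; epoch values computed from the UTC timestamps above):

```java
import java.time.Duration;
import java.time.Instant;

public class LeapSecondGap {
    public static void main(String[] args) {
        Instant before = Instant.ofEpochSecond(1483228790L); // 2016-12-31 23:59:50 UTC
        Instant after  = Instant.ofEpochSecond(1483228810L); // 2017-01-01 00:00:10 UTC

        // Epoch-second subtraction sees 20 seconds; the leap second 23:59:60 is invisible.
        long gap = Duration.between(before, after).getSeconds();
        System.out.println(gap); // 20, even though 21 SI seconds actually elapsed
    }
}
```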
And that's the crux of the issue. Given a level of abstraction, we're comfortable with a sufficient level of accuracy because it's the most convenient for 99% of the use cases. It takes widely adopted tooling to bump the abstraction to a more accurate representation. In UNIX land, going from YYYYMMDDHHMMSS to a number of seconds was a huge improvement. It's comfortable to use because all environments have the tooling to go back and forth between seconds and human-readable dates.
Try to use TAI and you start feeling a lot more lonely (ever tried to read logs timestamped in TAI? Fun!). Most OSes and server processes are used with the tradeoff that it's not a big deal if two events spaced a second apart are both timestamped the same (1483228799), since it's not that frequent.
Correct. POSIX time is defined as the number of seconds per (non-leap-second) day (86400) times the number of complete days since the epoch, plus the number of seconds elapsed in the current day.
POSIX timestamps get weird around leap seconds. Time stamps get repeated, subtraction returns incorrect results, and so on.
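That definition can be written down directly, and it shows where the repetition comes from: counted this way, the leap second at the end of 2016-12-31 (day number 17166 since the epoch) collides with the following midnight.

```java
import java.time.LocalDate;

public class PosixFormula {
    // POSIX definition: 86400 * complete days since 1970-01-01 + seconds into the current day.
    static long posixTime(long daysSinceEpoch, long secondsIntoDay) {
        return 86400L * daysSinceEpoch + secondsIntoDay;
    }

    public static void main(String[] args) {
        long days = LocalDate.of(2016, 12, 31).toEpochDay(); // 17166

        // The leap second 23:59:60 is second 86400 of its day; the formula maps it
        // to the exact same number as 00:00:00 of the next day.
        System.out.println(posixTime(days, 86400));  // 1483228800
        System.out.println(posixTime(days + 1, 0));  // 1483228800 again
    }
}
```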
In this segment, the Java Time-Scale is identical to UTC-SLS. This is identical to UTC on days that do not have a leap second. On days that do have a leap second, the leap second is spread equally over the last 1000 seconds of the day, maintaining the appearance of exactly 86400 seconds per day.
Java apps don't use UNIX time; they use something very similar called the Java time-scale, which is basically the same as UTC but with leap seconds smeared.
The java.time API is really incredibly thorough. It does of course have support for Japanese dates:
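For example, via `java.time.chrono.JapaneseDate` (note that whether a given JDK knows the newest era depends on its update level; Reiwa support only arrived in later JDK updates):

```java
import java.time.LocalDate;
import java.time.chrono.JapaneseDate;
import java.time.chrono.JapaneseEra;
import java.time.temporal.ChronoField;

public class JapaneseEraDemo {
    public static void main(String[] args) {
        // The Japanese chronology picks the era from the ISO date automatically.
        JapaneseDate d = JapaneseDate.from(LocalDate.of(2019, 4, 30));
        System.out.println(d.getEra() == JapaneseEra.HEISEI); // true
        System.out.println(d.get(ChronoField.YEAR_OF_ERA));   // 31 (Heisei 31)

        // And back: era-relative input converts losslessly to an ISO date.
        JapaneseDate heisei31 = JapaneseDate.of(JapaneseEra.HEISEI, 31, 4, 30);
        System.out.println(LocalDate.from(heisei31));          // 2019-04-30
    }
}
```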
IIRC Google and the like solve this using a special time routine in their kernels that literally just spreads the second out across the day (a "leap smear"). It's close enough... and legal will still accept it for auditing requirements.
I should have qualified that with an exclusion for recurring and future events. Identifying present and past times, which is the majority case, is not that difficult. Future times are one of those cases where the availability heuristic fools you; you can readily think of cases where they're a concern, but they're the clear minority of things that have to deal with time.
> I don't want to say it's trivial, but it's not that hard once both you and your code base internalize "convert everything to internal UTC representation as quickly as possible, convert to local time as late as possible on the way out".
It's really not that easy. Recurrences and durations are the killers.
When you schedule that meeting for 4:00PM every Tuesday, what happens when daylight savings time kicks in?
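A sketch of that failure mode with java.time (zone and dates are illustrative, picked to straddle the US spring-forward on 2019-03-10): zone-aware arithmetic keeps the wall clock at 4:00 PM, while naive UTC arithmetic drifts.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class WeeklyMeeting {
    public static void main(String[] args) {
        ZoneId ny = ZoneId.of("America/New_York");
        // Tuesday 2019-03-05, 16:00 local, the week before the DST transition.
        ZonedDateTime meeting = ZonedDateTime.of(2019, 3, 5, 16, 0, 0, 0, ny);

        // Zone-aware arithmetic keeps the wall clock at 16:00 across the switch...
        System.out.println(meeting.plusWeeks(1).toLocalTime()); // 16:00

        // ...but "store UTC, add 7*24h" silently moves the meeting to 17:00 local.
        Instant naive = meeting.toInstant().plusSeconds(7 * 24 * 3600);
        System.out.println(naive.atZone(ny).toLocalTime());     // 17:00
    }
}
```

This is why recurrences generally have to be stored as local time plus a zone (plus a rule), not as a bare UTC instant.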