
This is about the "pit of success", being correct and predictable by default. A difference of timestamps not returning the corresponding elapsed wall-time is _very_ surprising.

I want to be able to compute durations between timestamps stored in the DB, received from API calls or retrieved from the system and get the right duration "out of the box". Computing these durations lets me apply business logic relying on it. A message can be editable for x amount of time, a token is valid for y amount of time, a sanction expires after z amount of time, etc.

For example, I want to issue some token valid for 60s. What should I set the expiry time to? `now + 60s`. Except that if `now` is 2016-12-31T23:59:30Z, most libs will return a time that is actually 61s in the future, because of the leap second inserted at the end of 2016.
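To make the arithmetic concrete, here's a minimal sketch in plain Rust with hardcoded values (no datetime library involved); 1483228770 is the Unix timestamp for 2016-12-31T23:59:30Z, and the leap second inserted at the end of 2016 is counted by hand:

```rust
// Naive wall-clock arithmetic vs. real elapsed seconds across the
// leap second inserted at 2016-12-31T23:59:60Z. Values hardcoded
// for illustration only.
fn main() {
    // 2016-12-31T23:59:30Z as a Unix timestamp (leap seconds not counted).
    let now_unix: i64 = 1_483_228_770;

    // Naive expiry: "60 seconds later" on the Unix timeline,
    // i.e. the label 2017-01-01T00:00:30Z.
    let expiry_unix = now_unix + 60;

    // The inserted leap second at 23:59:60 means 61 SI seconds
    // actually pass between those two labels.
    let leap_seconds_in_between = 1;
    let real_elapsed = (expiry_unix - now_unix) + leap_seconds_in_between;

    assert_eq!(expiry_unix - now_unix, 60); // what the library reports
    assert_eq!(real_elapsed, 61);           // what a stopwatch would measure
    println!("token nominally valid for 60s, actually valid for {real_elapsed}s");
}
```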

1 second may not be a big error, but it's still an error, and depending on context it may be relevant. This is a systematic error unrelated to time sync / precision concerns, so it's pretty frustrating to see it being so common. It seems, though, that we won't have any new leap seconds in the near future, so eventually this will just become a curiosity from the past and we'll be stuck with a constant offset between UNIX time and TAI.

> I feel like that's well served by specialized libraries.

Agreed that you need a specialized lib for this, but my point is that _you shouldn't have to_, and the current situation is a failure of software engineering. Computing `t2 - t1` in a model assuming globally synchronized time should not be hard. I don't mean it as a personal critique; this is not an easy problem to solve since UNIX timestamps are baked in almost everywhere. It's just disappointing that we still have to deal with this.
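For what it's worth, here's a rough sketch of what a leap-second-aware `t2 - t1` over Unix timestamps could look like. The table is illustrative and truncated; a real implementation would load the full IERS leap second list (e.g. from tzdata):

```rust
// Unix timestamps at which a leap second had just been inserted.
// Truncated, illustrative table; not a complete list.
const LEAP_SECOND_INSERTIONS: &[i64] = &[
    1_435_708_800, // 2015-07-01T00:00:00Z (leap second at end of 2015-06-30)
    1_483_228_800, // 2017-01-01T00:00:00Z (leap second at end of 2016-12-31)
];

/// Real elapsed SI seconds between two Unix timestamps (t1 <= t2),
/// counting the leap seconds inserted in between.
fn elapsed_seconds(t1: i64, t2: i64) -> i64 {
    let inserted = LEAP_SECOND_INSERTIONS
        .iter()
        .filter(|&&ls| t1 < ls && ls <= t2)
        .count() as i64;
    (t2 - t1) + inserted
}

fn main() {
    let t1: i64 = 1_483_228_770; // 2016-12-31T23:59:30Z
    let t2: i64 = 1_483_228_830; // 2017-01-01T00:00:30Z
    assert_eq!(t2 - t1, 60);                 // naive difference
    assert_eq!(elapsed_seconds(t1, t2), 61); // actual elapsed time
}
```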




What I'm not clear on, though, is what the failure mode is in your scenario. What happens when it's wrong? Does something bad happen? If something is one second longer or shorter than it ought to be on very rare occasions, then what goes wrong? I would, for example, imagine that the business use case of "editable for x amount of time" would be perfectly fine with that being plus or minus 1 second. It's not just about correctness, it's about the error and what it means.

A few months ago, Jiff did have leap second support. It worked. I know how to do it, but none of the arguments in its favor seem to justify its complexity. Especially when specialized libraries exist for it. You can't look at this in a vacuum. By making a general purpose datetime library more complex, you risk the introduction of new and different types of errors by users of the library that could be much worse than the errors introduced by missing leap second support.


> You can't look at this in a vacuum. By making a general purpose datetime library more complex, you risk the introduction of new and different types of errors by users of the library that could be much worse than the errors introduced by missing leap second support.

Agreed, I can easily imagine it causing a situation where some numeric value is not interpreted correctly, resulting in a constant offset of 37 seconds. UNIX timestamps are entrenched, so deviating from them introduces misuse risks.

Regarding my use cases, I agree that these should still work fine. I could also come up with cases where a 1s error is more meaningful, but they would be artificial. The main problem I can see is using an absolute timestamp instead of a more precise timer in a higher-frequency context.

Overall, it's the general discussion about correctness vs. "good enough". I consider that the extra complexity in a lib is warranted if it means fewer edge cases.


> Overall, it's the general discussion about correctness vs. "good enough". I consider that the extra complexity in a lib is warranted if it means fewer edge cases.

Yeah I just tend to have a very expansive view of this notion. I live by "all models are wrong, but some are useful." A Jiff timestamp is _wrong_. Dead wrong. And it's a total lie. Because it is _not_ a precise instant in time. It is actually a reference to a _range_ of time covered by 1,000 picoseconds. So when someone tells me, "but it's not correct,"[1] this doesn't actually have a compelling effect on me. Because from where I'm standing, everything is incorrect. Most of the time, it's not about a binary correct-or-incorrect, but a tolerance of thresholds. And that is a much more nuanced thing!

[1]: I try hard not to be a pedant. Context is everything and sometimes it's very clear what message is being communicated. But in this context, the actual ramifications of incorrectness really matter, because they get to the heart of whether leap seconds should be supported or not.


I think that you have made the right decision.

One of the most common and high-impact failure modes is crashing when parsing a leap second that is represented as "23:59:60".

Jiff is able to parse leap seconds, and beyond that, I doubt that there are many scenarios where you care about leap seconds but not enough to use a library that supports them.
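To illustrate the two behaviors with a hypothetical plain-Rust helper (not Jiff's actual code): a strict parser rejects second == 60, so callers that unwrap the result crash on real-world data, while a lenient one accepts it, here by clamping to :59, which is one common approach:

```rust
// Hypothetical lenient time-of-day parser: accepts a leap second
// ("hh:mm:60") instead of erroring, and clamps it to :59.
fn parse_seconds_lenient(hms: &str) -> Result<(u8, u8, u8), String> {
    let mut parts = hms.split(':');
    let mut next = |name: &str| -> Result<u8, String> {
        parts
            .next()
            .ok_or_else(|| format!("missing {name}"))?
            .parse::<u8>()
            .map_err(|e| format!("bad {name}: {e}"))
    };
    let (h, m, s) = (next("hour")?, next("minute")?, next("second")?);
    if h > 23 || m > 59 || s > 60 {
        return Err(format!("out of range: {hms}"));
    }
    // A strict parser would also reject s == 60 here; we clamp instead.
    Ok((h, m, s.min(59)))
}

fn main() {
    // A strict parser returns an error for "23:59:60", and a careless
    // caller panics on unwrap; the lenient one degrades gracefully.
    assert_eq!(parse_seconds_lenient("23:59:60"), Ok((23, 59, 59)));
    assert_eq!(parse_seconds_lenient("23:59:59"), Ok((23, 59, 59)));
}
```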


Eh, is this ideal worth chasing when it's all fucked anyway once we start shipping code to Mars?

https://en.m.wikipedia.org/wiki/Barycentric_Coordinate_Time



