Working with time in Postgres (craigkerstiens.com)
343 points by neovintage on June 8, 2017 | 120 comments



You cannot have a post entitled "Working with time in Postgres" and fail to mention Range Types[1]!

If you're using Postgres right now and have any columns like start_* and end_* for anything (e.g. numbers or dates), you should stop what you're doing and use a range type. They are amazing. You can do things like a unique index that ensures there are no overlapping ranges, efficient inclusion/exclusion indexing, and much more.

Use them. I'm always surprised more people don't know about them.
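For instance, a minimal sketch of the no-overlap trick (hypothetical table; btree_gist is needed to mix equality with range overlap in one constraint):

    CREATE EXTENSION IF NOT EXISTS btree_gist;

    CREATE TABLE room_reservations (
        room_id int NOT NULL,
        during  tstzrange NOT NULL,
        -- no two rows for the same room may have overlapping ranges
        EXCLUDE USING gist (room_id WITH =, during WITH &&)
    );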

1. https://www.postgresql.org/docs/9.6/static/rangetypes.html


Yes! Range types should absolutely be in here. I had them as one of the top items in a recent post, so I thought it'd be a bit repetitive[1], but in retrospect it should absolutely be here as well.

[1] http://www.craigkerstiens.com/2017/04/30/why-postgres-five-y...


All you have to do is self-cite and say you won't belabor the point any further. ^_^


They're awesome, and Django's ORM has built-in support for them.

https://docs.djangoproject.com/en/1.11/ref/contrib/postgres/...


I idled here expecting something on time-series data. The article, whilst useful, is incredibly thin: a one-page blog post. That's fine, but I'm not sure why it has lasted so long here. Thanks for it all the same; perhaps the thread will beget something more comprehensive. (Not volunteering.)


I'm so sad range types are unsupported in AWS Redshift. :(


Practically speaking, how would you use these for noting starts and ends of long-running jobs, say?

Would you set the interval starting time, but leave the end of the interval as "present"/infinity? And then update the end of the interval when the job finished? Wouldn't you also need to have a cleanup function to manually "close" intervals if the worker crashed and restarted?


I'm not sure to be honest, I would set the end as infinity I think.

I wouldn't have the worker process handle this itself though, as you would need some form of cleanup. But you'd need the same with two individual columns.
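Something like this might work as a sketch (table and column names are made up):

    -- open-ended range while the job is running
    INSERT INTO jobs (id, running) VALUES (42, tstzrange(now(), NULL));

    -- close the range when the job finishes
    UPDATE jobs SET running = tstzrange(lower(running), now()) WHERE id = 42;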


Maybe a chosen value such as "2100-01-01 00:00" could work.


Do you know if they can be combined with the timetravel extension?



Yeah. Would be really cool to combine both fields in one to get a cleaner schema.


"I was in favour of space exploration until I realised what it'd mean for date time libraries" -- Joe Wright

https://twitter.com/joe_jag/status/510048646482894848?lang=e...


> Postgres has two types of timestamps. It has a generic timestamp and one with timezone embedded in it.

That's not correct: timestamptz doesn't have a timezone embedded in it. It's just timezone-aware. A timestamptz corresponds to a universal point in time that has many human representations, one for each timezone. psql uses the default timezone of the postgres instance to convert a timestamptz to a displayable string, so timestamptz values are always displayed with a timezone, but that info does not come from the stored value.

Timestamptz needs timezone information only for operations that would give different results in different timezones, e.g. display as string, extract the day part, add a 1-month interval (DST info needed), etc. Comparing two timestamptz however doesn't require any timezone info.

The difference between timestamp and timestamptz is not about what they store, but about how they behave.

Edit: In my experience, this is not always obvious because postgres uses the default timezone of the instance whenever it needs such info with timestamptz operations. Using an explicit timezone often requires convoluted code.
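A quick way to see it (the rendered values here are illustrative; the stored value never changes, only the rendering does):

    SET timezone = 'UTC';
    SELECT '2017-06-08 12:00:00+00'::timestamptz;  -- 2017-06-08 12:00:00+00
    SET timezone = 'America/New_York';
    SELECT '2017-06-08 12:00:00+00'::timestamptz;  -- 2017-06-08 08:00:00-04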


Thanks. If I store all datetimes from my app in UTC, with end users in more than one timezone, which type should I use?


Well, I would use timestamptz, using the user's timezone only to convert for display. Use cases for timestamp are very limited.

Just make sure you include timezone info in the string representations in your SQL queries, for example '2000-01-01T00:00:00Z', where Z stands for UTC. Otherwise you would be inserting a timestamp into a timestamptz column, in which case postgres implicitly uses the local timezone setting for the conversion; this is not what you want.

See http://phili.pe/posts/timestamps-and-time-zones-in-postgresq...

Also, you should use an equivalent type in your app, i.e. a Python datetime with tzinfo or a JS Date. And beware of UTC offsets: they can't handle DST. Python's pytz and JS's moment-timezone provide DST-aware timezone info (which is built into postgres).

Edit: if you can rely on your users' system time for display, that's even better, because you wouldn't have to explicitly deal with that DST-aware timezone info.
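To sketch the above (assuming a hypothetical events table with a timestamptz column):

    -- the trailing Z marks the input as UTC, so no implicit local conversion happens
    INSERT INTO events (created_at) VALUES ('2000-01-01T00:00:00Z');

    -- convert to the user's timezone only when displaying
    SELECT created_at AT TIME ZONE 'Europe/Paris' FROM events;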


Every time I deviate from using unix milliseconds as my timestamps, I end up regretting it. If we use unix seconds, we get infinite bugs from people forgetting to convert to millis when comparing against the current time. If we use Date objects, the surface of potential bugs is even larger. Every Date interface I've ever seen makes it far too easy to accidentally create a relative time, i.e. anything that can't be mapped unambiguously to a single unix-millis timestamp (usually a datetime that defaults to the current timezone). Does anyone have a preferred method that avoids these pitfalls?

At the end of the day I always come back to "solution with lots of possible bugs" or "unix millis everywhere". And I always choose the latter. It means we can't use nice date features in a lot of databases, but...eh? They've never seemed worth it.


I would say: Think carefully about what kind of 'time' you are trying to represent. Instants like unixtime are common in many problem domains, but there are plenty of situations where other choices are appropriate.

For example, I wrote an event registration marketplace some time ago. You might think "start time for event" would naturally fit as a unix timestamp, but that's a mistake. If you have an event at 10am in Las Vegas, moving it to Chicago shouldn't suddenly change the start time. And never store "all day" dates as a timestamp (i.e. date-at-midnight); timezone issues can easily produce off-by-one dates.

Basically, 'time' is not a single thing. You usually want to represent it the way your users think about it - and that isn't always like a unix timestamp (although it very often is).


Unix timestamps are bound to UTC[1], so it would never be correct to have a timestamp represent the number of seconds since 1970 _somewhere_. You would have to convert the timezone for any location you want it to be relative to.

[1] https://en.m.wikipedia.org/wiki/Unix_time


Yes yes. But your user doesn't think in terms of seconds-since epoch. Your problem domain includes two pieces of information: a localdatetime and a location. 10am 12/22/17 in Las Vegas. They set those fields separately in pretty much every conceivable UI, and you store them separately in pretty much every conceivable database.

If you store your event time as epoch time, you're storing "time of event in UTC" and "location of event". Which means every change of location requires you to recalculate and update the epoch time of the event. When users change Las Vegas to Chicago, you must update two fields - and if you ever screw it up, you have no way of knowing what the user originally intended.

This is a very bad case for epoch time. You may want to create an index on the calculated epoch time for queries, but store the datetime as localdatetime - the form that most accurately represents the source data.

You are of course free to ignore this advice and learn the hard way. Just stay off my lawn.


> If you have an event at 10am in Las Vegas, moving it to Chicago shouldn't suddenly change the start time.

Does this come up a lot? Moving from Las Vegas to Chicago would involve much more than just being aware of the time zone change.


Maybe here's an example that's more useful: Some years ago, I worked on an in-house IaaS platform that included several workflow modules for the sysadmins (alert dispatching, time-tracking, etc.).

One of those components was a scheduler for one-time and recurring tasks. For example, when you delete a system, you set a timer for 14 days to have the system remind you (via e-mail or by issuing an alert into the queue) to delete the system's backup. There were also a lot of recurring tasks that needed to be performed daily or weekly. Now, if you have a task that's configured to run daily at 9 AM, it's tempting to implement the timestamp series as

  while (true) { timestamp += 86400; }
And indeed, that's how it was done in the existing code. But that means that once DST starts or ends, your daily-at-9-AM-task suddenly happens at 8 AM or 10 AM instead. Whether that's a problem depends on the type of task and how the sysadmins organize their work. And then there's the monthly recurrence, which is even messier with plain timestamps.

I cannot recall all details anymore, but I definitely remember that twice a year, after each DST change, someone would go through the list of recurring tasks, and move the times forward (or backward) by one hour manually.
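In Postgres terms, the fix would be to do the day arithmetic in the local zone and only then convert back to an instant, roughly like this (zone name assumed):

    -- tomorrow at 9 AM local time, correct across DST transitions
    SELECT (date_trunc('day', now() AT TIME ZONE 'Europe/Berlin')
            + interval '1 day' + interval '9 hours')
           AT TIME ZONE 'Europe/Berlin';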

EDIT: Maybe the simplest (though not easiest) solution to the irregular month lengths would be to attach giant thrusters to Earth and push it away from the sun a bit, so that our year is 366 days long instead of 365. Then we make a calendar with 6 months of 61 days each. As a bonus, it would reverse some of the effects of global warming. (Alternatively, go to 368 days and have 16 months with 23 days each.)


A much simpler calendar has 364 = 13 * 28 days: 13 months of EXACTLY 4 weeks (28 days) each. This one is so simple it might be worth having 1-2 extra-calendar days per year, even with a 365.24...-day year.


It depends on the application semantics, for example if I scheduled a Breakfast for 08:00 in Las Vegas, but then moved the event to Chicago, I wouldn't want the time to be auto-converted to 06:00, (or whatever is appropriate given DST), because that's a silly time to have breakfast.


It doesn't need to come up "a lot" for your application to feel buggy. It just needs to come up.

Your UI has a selector for Date/Time and a selector for Location. They might even be different screens. The Location may have been prefilled with a guess based on geoip. Or maybe they saved the wrong 'Springfield'.

The user took an action to change Location. If that has the side effect of changing Time, the user will be surprised.


In some sense, Postgres agrees with you, since the underlying storage for a timestamp is something morally equivalent to that. (Internally, a 64-bit count of microseconds since 2000-01-01.)

However, you do need to do date arithmetic from time to time, whether using a wrapped epoch time in the database or in the application. "One day from now" turns out to be complicated enough that we delegate to libraries to get it right; and Postgres's implementation of these features is solid. When you want to `GROUP BY` day, for example, there are performance benefits to doing that on the database side -- and for analysts, there is often little alternative but to handle dates with DB provided functionality.

When it comes to date arithmetic, how do you handle that with UNIX timestamps?


The problem with Unix milliseconds is that they're actually not always increasing, thanks to leap seconds. A positive leap second will result in the fractional part of the last second of the day going up to .999..., then resetting to .000... and counting up over again.

https://en.wikipedia.org/wiki/Unix_time#Leap_seconds



I'm with Google on this one... Just smear the leap seconds out in the general case, and anything that needs to be within 500ms of UTC can be handled as a special case.


The standard Python date library does not allow any sort of operation between TZ-aware and TZ-naive dates. You're expected to explicitly convert between the two.

The Django postgresql adapter will aggressively show warnings in all of the cases where you're trying to insert a TZ-naive date into a TZ-aware column.

Is this that hard to reproduce?


It sounds like what you really want is a wrapped "instant" class, which contains something more or less equivalent to a Unix timestamp inside it, but prevents you from accidentally interpreting it using the wrong epoch or units. That's what a Postgres timestamp is. Internally, it's a modified Julian date + nanoseconds since midnight UTC, or something like that. But you don't care if you don't work inside the Postgres source.

It is regrettable that APIs make it easy to create datetime objects without timezones, which is why sane people have moved to e.g. Joda Time and the libraries based on it.


When you talk about Date objects, do you mean datetime? Because a Date with no time component has tons and tons of real-world uses for which a unix timestamp would be inappropriate.


Can you provide examples? It only seems useful to me for display purposes if you know you'll never need to worry about any specific local time zone (or if you will, then you know what time zone that is and know how to deal with it).


Just general calendar arithmetic. There are tons of cases where you need to do stuff like calculate "one month before", "one month after", "beginning of the year", ... --that's a mess with unix timestamps, but trivial with a good calendar library (or postgres, for that matter).
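For example, in Postgres:

    SELECT date '2017-03-15' - interval '1 month';  -- one month before
    SELECT date '2017-03-15' + interval '1 month';  -- one month after
    SELECT date_trunc('year', now());               -- beginning of the year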


I agree with you (I don't use unix timestamps), but if you're using a good calendar library you can certainly get the result as a unix timestamp.


Well, yes, you can convert on input and/or output from/to unix timestamps, sure (and you can with postgres, too), but the point is that you have to convert first, you can't do date arithmetic on unix timestamps directly (because there is no fixed relation between unix timestamps and days).


I have had good success working with Postgres timestamp without time zone.


I don't have much of a preference for milliseconds or seconds, but yeah, Unix time stamps are nearly always the way to go.


I love Postgres' time handling, even more so whenever I have to handcraft time-based queries in other databases, like MongoDB (which is more often than I'd like).

Some things the author didn't mention that I like:

* Timestamp with time zone string parsing: '2013-06-27 13:15:00 US/Pacific'::timestamptz

* Timezone-aware to timezone-naive conversion (or vice versa): mytztime AT TIME ZONE 'US/Pacific'

* I haven't used tstzrange yet, but it looks pretty powerful.


Another tip, if you work with per-user custom timezones, then "SELECT some_date AT TIME ZONE %(users_timezone)s" is also sometimes useful and needed.

Normally you would want to receive timezone-aware timestamps from the database, and format them in the user's timezone at display time--perhaps in a template. But if you're e.g. aggregating data for a day-over-day or month-over-month report, then the conversion to naive dates needs to happen on the database side, so that day boundaries and month boundaries match the user's timezone.
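A sketch of that aggregation case (table and column names are made up):

    -- day boundaries follow the user's timezone, not the server's
    SELECT date_trunc('day', created_at AT TIME ZONE %(users_timezone)s) AS day,
           count(*)
    FROM signups
    GROUP BY 1
    ORDER BY 1;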


I'm a bit of a n00b at date handling. What's the benefit of communicating with TZ-aware timestamps versus TZ-less timestamps with the shared understanding that they're always UTC? With the latter approach I can also convert to my local time for display.


That the database can't do computations on what it's not aware of. If you want to ask the database for "events today", the database needs to know what span of time corresponds to "today" for the user.
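For instance, "today" for a user in Tokyo could look like this (names assumed):

    SELECT count(*)
    FROM events
    WHERE occurred_at >= date_trunc('day', now() AT TIME ZONE 'Asia/Tokyo') AT TIME ZONE 'Asia/Tokyo'
      AND occurred_at <  (date_trunc('day', now() AT TIME ZONE 'Asia/Tokyo')
                          + interval '1 day') AT TIME ZONE 'Asia/Tokyo';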


I never see discussion of fiscal calendars in these sorts of threads and articles. Whenever I read these, all I see is something that would fall apart as soon as approximately half of my clients look at it.

We have found it more annoying to keep track of a library's behavior than of a home-grown date dimension. In my organization, we tend to use a standardized pattern that can handle arbitrary calendars, even when we're dealing with standard calendars.


Do you guys use a custom date type, then?


No. Dates are dates. Everyone can agree when a specific day exists. It's all about grouping.

Dates have attributes that group them together. "Month" is an attribute that you're familiar with. "Fiscal Period" can take on many specific definitions but it is analogous to "Month".

Those two concepts share a lot of properties. They each collect a series of contiguous dates. Each is adjacent to a similar grouping that falls sequentially and the last date that exists in one is one day prior to the first day that exists in the next. Each falls within a larger category like "Year" or "Fiscal Year".

Year+Period forms a composite key for a period. We can also assign a monotonically increasing field that increments by one with each subsequent period. That field allows simple arithmetic to shift forward and backward. We typically call this attribute PeriodIndex or PeriodSequential. I'll abbreviate to PI here.

If you have a reference PI, you can always find the immediate prior period by subtracting one from the reference. We can assign these for any grain of time period. We typically see Week, Period, Quarter, Semester, Year.

This is the baseline of how we handle dates. There are plenty of utility fields we'll maintain for specific time-based needs, but it's all sugar on top of that.
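A minimal sketch of that pattern (column names are just illustrative):

    CREATE TABLE fiscal_calendar (
        date_stamp    date PRIMARY KEY,
        fiscal_year   int NOT NULL,
        fiscal_period int NOT NULL,  -- analogous to "month"
        period_index  int NOT NULL   -- monotonically increasing across years
    );

    -- the prior period is then simple arithmetic on the index,
    -- e.g. WHERE period_index = reference_pi - 1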


One thing I learned about working with postgres and time is that the timezone is based on the timezone of the connecting client and not the actual server. I can't tell you how long it took me to debug code due to my workstation being on CST but the servers on EST, and then storing dates as UTC. Bundle that with calculating upcoming birthdays 15-30 days in advance, and leap years.

Yeah, I didn't like it one bit. Sorta reminds me of when I had to develop a Gantt chart component in Flex for a client; so many problems with dates.


Fantastic article as usual. One correction: the literal for 00:00:00.00 UTC is 'allballs', not 'allsballs' as mentioned. I know this because it made me giggle when I first discovered it, and it subsequently became an immature joke around the office for a day or two afterwards.


Doh! Will fix.


What a ballsup


Last time I checked, I couldn't store a datetime _with_ its timezone. It was really strange that such a powerful database doesn't support storing full ISO datetimes like '2017-01-01T00:00:00Z'. Instead, it converted them to a timezone-less instant, losing the information about the original timezone along the way. Sure, I could fetch the value back using any timezone I want, but I really wanted to know the original timezone it was in.


You can do that with the 'TIMESTAMP WITH TIME ZONE' column type.

https://www.postgresql.org/docs/current/static/datatype-date...

(Edit: or not. See child comment)


Nope, that doesn't store the time zone, it just uses time zone information before flattening to UTC time.

https://stackoverflow.com/a/9576170/215168


When would you want to use something other than UTC for business logic? Time zones (and their related nonsense) should be a view-layer concern.


Because there actually are times that are specified in terms of local time and are not fixed to a specific timezone. Take a birthday, for a trivial example: The span of time in UTC that corresponds to someone's birthday depends on their location at the time.


True that anniversaries are not fixed to a timezone, but these are not timestamps, and generally not even dates - just recurrences specified by month and day (or maybe time of day, e.g. Armistice Day observed at 11:00am).

Better example: typically an events schedule should specify the destination timezone when registering an event, so that if regulations on the local timezone change, scheduled event times remain valid.


For an example of how storing a UTC datetime for a future event can go wrong, see my comment:

https://news.ycombinator.com/item?id=12988092


Perhaps because there's not always a one-to-one relation between a time with zone and a unix timestamp.


That is correct behaviour. Time zone information is a presentation detail.


Not entirely. Thanks to daylight saving time, you need timezone information to properly calculate lengths of timespans, e.g. for daily recurrences.


Yes, if you are creating appointments in a calendar program and want to have a daily meeting at 3pm and you are in a jurisdiction with DST, then you will need some TZ info so it knows when to wiggle the time by an hour.

But in this case, there's a lot of other information you want to store: first date in the series, repetition frequency, time of the appointment, location (e.g. "Europe/London" as opposed to "GMT" or "UTC+0" which would not be adequate). Basically, you're not so much storing a series of dates as storing a thunk for generating dates.
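A rough sketch of such a thunk (all names assumed):

    CREATE TABLE recurring_meetings (
        id         serial PRIMARY KEY,
        first_date date NOT NULL,
        local_time time NOT NULL,      -- e.g. 15:00
        tz_name    text NOT NULL,      -- e.g. 'Europe/London', not a fixed offset
        frequency  interval NOT NULL   -- e.g. '1 week'
    );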


Would it be that much work to add a smallint field, that had the original UTC offset used for your time?


Timezones are more than just an offset.


True.

I'm sure a use case exists where you want to store a timezone, but I don't know what it is. I try and use UTC everywhere, and only worry about time zones when displaying.


Any future event (such as a meeting) can't be stored as UTC, because time zone rules may change between now and the event date, but the event still needs to happen 10am local time.


I don't think that's necessarily true. If you really care about that edge case, then you would need to ask whether the local time should change when the local time zone's UTC offset unpredictably changes. I don't think you can make a safe assumption either way.


How often do time zone rules change? That seems like a fairly rare event. If you're talking about daylight savings time, the date library you're using to convert local time to UTC should account for that.


This happens all the time, and sometimes even with just a few days heads-up. Here's a great example: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=457938


There's always changes happening. Just this year, Southern Chile changed its summer time zone offset, Haiti started observing DST and Mongolia stopped doing that.


Seriously?! Man, dealing with time can just never be simple...


Time, to me, is the canonical example of "things that people think should be easy, aren't". It has everything - complex, constantly changing "business" rules, exceptions generated essentially randomly by a shadowy cabal[1] nobody's ever heard of, "impossible" situations like the 11 days that never happened and other technically arbitrary calendar edits[2], multiple silly base conversions, really weird rules for picking certain dates[3], lots of opportunities for making fencepost errors, and lots of other things.

Which leads me to my rule with time programming: Never fail to use a solid library, unless you're unfortunate enough to be writing one.

This[4] is a great, necessary but not sufficient book if you have to do that.

[1] https://en.wikipedia.org/wiki/International_Earth_Rotation_a...

[2] For example, https://en.wikipedia.org/wiki/Swedish_calendar

[3] https://en.wikipedia.org/wiki/Easter#Computations

[4] https://www.amazon.com/Calendrical-Calculations-Nachum-Dersh...


Don't even mention how to calculate the number of weeks in a year! It's often needed in real-world programming for business reports, yet the Joda-Time library still doesn't support weeks not starting on Monday (like US ones) for that.


It doesn't have to be rule changes; time zones have relative and changing references against UTC, for example daylight saving time, which shifts twice a year in the US.

If you schedule a meeting for 4pm Tuesday next week, but DST happens Friday this week, you still want the meeting on 4pm Tuesday next week. The local timezone and local timestamp is necessary for that.


If the software converts to UTC it should be using the expected local timezone on the target date (4pm Tuesday), not the current timezone. So this will only be a problem if the TZ tables for the future are not correct (which happens, as noted in sibling comments, but is not as widespread as every DST event like your comment seems to imply).


> expected local timezone

I'm not sure what this means. Either way, to store an appointment for a certain user, you need the local timezone of that user and the local timestamp. UTC version of that timestamp is optional (maybe for easier calculations in the database), but you must have the local timestamp to convert correctly.

Local TZ => UTC is a formula that is not constant. You cannot store the output of this formula and expect to reverse and get back the original input precisely because of the changing formula.


>I'm not sure what this means.

The timezone that the user will experience on that date.

>Either way, to store an appointment for a certain user, you need the local timezone of that user and the local timestamp.

Correct. I did not disagree with this.

>Local TZ -> UTC is a formula that is not constant. You cannot store the output of this formula and expect to reverse and get back the original input precisely because of the changing formula.

Also correct. But as I said, your original comment implied this is a concern for all DST events which is not true. It is only a concern when the time was converted using some DST assumption and then that assumption is invalidated (eg the software decided to convert "4pm Tuesday local" to UTC assuming the timezone on Tuesday will be UTC+7, but the DST change happens to come early this year and in fact it's UTC+8 on Tuesday).

Regardless, I agree that taking this risk is not correct. Such a time should not be converted to UTC to begin with. Better to store the original local time.


Yes, it is trivial extra work and storage for a timezone + timestamp (along with optional UTC) to get 100% reliable calculations.

Timezones, especially globally, have enough updates being made that they are not something worth the risk just to save a few bytes.


That doesn't sound right. If you have a local timestamp with the date and the time and convert it to UTC, shouldn't the datetime library look up the rules for this timezone and convert it correctly? Isn't that the whole point of having timezones be more than offsets?


What doesn't sound right? You might be saying the same thing. To store this appointment correctly, you store the timezone of the appointment (let's say EST) and the timestamp within that timezone (4pm). So the final version is 16:00 EST. You can store a UTC version as well, but you need the local timestamp and timezone first.

If you only store the timezone with a UTC timestamp, when you convert back after DST shifts you'll end up with 3pm or 5pm (depending on the shift), not the expected 4pm for the actual meeting. UTC in this case is not the anchor, the local timestamp the user set for the meeting is.


No, I mean you make the appointment for December 3rd at 10:00am in Eastern Time (ET), and then you convert it to UTC to put into the database. Your local->UTC conversion library looks at the date and notices that this occurs during Daylight Savings Time for ET, and uses the correct offset when converting. Unless the rules about DST change between now and that meeting this is completely correct, you don't need to store the local timestamp at all.


That is not reliable. Storing in UTC is optional, and can be used for easier comparisons, but you must store the local timezone and timestamp to be 100% reliable.

The local TZ => UTC formula may change for any reason, whether it's daylight saving time or any other random situation. It might be relatively stable in some locations (although it can change even within a global timezone like EST based on location) but there are constant updates being made. You can look at the IANA and Microsoft timezone database updates yourself. Why create risk when not necessary?


That was my original point. The rules about DST (or even just the base UTC offset) may change between now and the meeting unpredictably according to the whims of the local government, and this happens all the time.


Oh yeah I totally get that, but my point was the only problem is from rule changes, not from regularly scheduled DST events.


Learned something new today! Surprised that never crossed my mind, even though I experience daylight saving every year.


Not really, that still breaks when you have participants from multiple time zones.


It doesn't "break"; yes, you have to change the local time for the other time zones, but the point is that some times you have to use a specific reference time zone, so you got to save it.


No, appointments for participants from multiple time zones are not specified in implicit local time, but rather in a some explicit timezone, often UTC, so you have to store that timezone instead of the implicit local one.


Aren't timezones an offset of UTC by definition? I thought during daylight savings time a country is temporarily changing their timezone. Isn't that why we have EST and EDT: they're two different timezones?


No, time zones are a specific geo region that follows a certain time standard. The standard has a relative reference to UTC but that reference may change, either regularly or randomly.

Eastern Time Zone is a single time zone, that has different offsets depending on season, formally referred to as EDT and EST to make it easier to identify as daylight or standard references to UTC.

Even regions within the same time zone don't follow the standard exactly; for example, Panama does not observe daylight saving time while New York does. This is why we have even more granular settings used for calendars and dates.
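You can see the difference in Postgres, where a full zone name carries the DST rules (output shown for a UTC session):

    -- same zone name, different offsets depending on the date
    SELECT '2017-01-15 12:00'::timestamp AT TIME ZONE 'America/New_York';  -- 2017-01-15 17:00:00+00
    SELECT '2017-07-15 12:00'::timestamp AT TIME ZONE 'America/New_York';  -- 2017-07-15 16:00:00+00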


Doesn't matter as far as the DB is concerned.

You either just store UTC and co., or you store it as a datetime + offset.

100% of apps in production don't handle most of the timezone intricacies anyway and the sky hasn't fallen (heck, the sky hasn't even fallen for Y2K).


Timezones have more information and are a mini-database in themselves, which is why most databases include that information and have timezone-capable types. Offsets as simple numbers are not usable in any real calculation.

100% of apps? You must not know what timezone intricacies are then or just how much effort is spent to make sure time itself is properly handled, especially in any major application that has global users.


>You must not know what timezone intricacies are then or just how much effort is spent to make sure time itself is properly handled, especially in any major application that has global users.

You'd be surprised.


It does matter for preserving the original timezone. An int loses that information.


While I agree with the other replies, using a smallint would assume all timezones are offset in hourly increments, which isn't the case.


The week example is a tad misleading: 2017-01-01 is a Sunday, which in some (most?) countries is the first day of the week.

If the date were 2016-01-01 and you compared it with what week Postgres thinks it is, you'd get:

  SELECT date_part('week', '2016-01-01'::date);
   date_part 
  -----------
          53
  (1 row)
This is because 2016-01-01 is still the 53rd week of 2015.

Edit: Actually, 2017-01-01 is week 52 according to Postgres, probably because it uses Monday as the first day of the week.


Probably because it's using ISO-8601 week numbers. https://en.wikipedia.org/wiki/ISO_week_date


> Sunday, which in some/most? countries is the first day of the week.

Just like the imperial system, only a couple of weirdos do that.


Postgres uses the ISO definition of week for "week", which starts on Monday. For "dow", it uses the American week definition.


isodow for the sane definition ;)


This would have been real useful to me about a week ago as I was writing several of these types of queries!

On the debate of "timestamp vs timestamptz" I reached the opposite conclusion of the author: I've got Amazon RDS instances set to UTC and my timestamps are stored as UTC times with no timezone awareness. Instead, I add the timezone while querying. I think this is better because I never have to remember anything about server settings!

I discovered that the `AT TIME ZONE` clause has two meanings, so I sometimes have to use it twice. In this example which selects all records created this month:

    ...WHERE create_date  AT TIME ZONE 'UTC' AT TIME ZONE 'America/New_York' > date_trunc('month',current_date)
the first occurrence of `AT TIME ZONE` tells postgres that the timestamp is in UTC (which it is) and the second occurrence subtracts four or five hours (depending on daylight savings time) to show New York time. If I only had the second such clause it would subtract that many hours... it would think I was giving it a New York timestamp and I wanted to see the UTC time.


Actually, using generate_series makes little sense here. Why repeatedly calculate data that will never change?

I have this table:

    CREATE TABLE all_dates (
        date_stamp date NOT NULL,
        is_month_end boolean,
        is_year_end boolean,
        is_week_end boolean,
        is_quarter_end boolean,
        CONSTRAINT all_dates_pkey PRIMARY KEY (date_stamp)
    );

filled with data from 1st Jan 1980 to 31st Dec 2050, which is the range my application needs.

It's a mere 22k rows and has a whole host of uses.


timestamptz doesn't embed timezone, it stores it as utc without any timezone information.

timestamp does the same - stores value without timezone information.

the difference is with writing/reading those values where timestamptz behaves as you'd expect and timestamp ignores timezone information.

timestamptz - gives you the thing that exists: a unique point in time, i.e. if person A in australia and person B in europe hit the red button at the same time - timestamptz will have the same value, regardless of the fact that those two timestamp strings had different representations.

timestamp - gives you the local view of time: when person A in australia wakes up at 6am to work and person B in europe wakes up at 6am to work - they hit the snooze button and it creates the same value in the database - even though those events happened hours apart.

in both cases you'd have to store the timezone in a separate column if you want to extract information on which timezone the timestamp was generated in. let me repeat - both cases lose information on timezone. they just do it in different ways - timestamp by ignoring it completely and timestamptz by mapping it correctly to unix epoch.
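if the origin timezone matters, a common pattern is to store it alongside the instant (a sketch, names assumed):

    CREATE TABLE button_presses (
        occurred_at timestamptz NOT NULL,  -- the unique point in time
        origin_tz   text NOT NULL          -- e.g. 'Australia/Sydney'
    );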


Weird, the example in the post (after changing table/field names for my database)

  with weeks as (
    select week as week
    from generate_series('2017-01-01'::date, now()::date, '1 week'::interval) weeks
  ),

  SELECT weeks.week,
    count(*)
  FROM weeks,
    test_results
  WHERE
    test_results.date_created > weeks.week
  AND
    test_results.date_created <= weeks.week - '1 week'::interval
Throws an error for me...

  ERROR:  syntax error at or near "SELECT"
  LINE 5: SELECT weeks.week,
          ^


yeah, the comma after the "with" block shouldn't be there.

i.e.,

    ... weeks
    )
    SELECT weeks.week


makes sense. After removing it...

  with weeks as (
    select week as week
    from generate_series('2017-01-01'::date, now()::date, '1 week'::interval) weeks
  )

  SELECT weeks.week,
    count(*)
  FROM weeks,
    test_results
  WHERE
    test_results.date_created > weeks.week
  AND
    test_results.date_created <= weeks.week - '1 week'::interval
it throws...

  ERROR:  column "week" does not exist
  LINE 2:     select week as week
                     ^
I would move this to the post's own "replies" section, but it doesn't have one.


This should work. I'm sorry, I conformed it to my own style (lowercase and different indentation).

    with weeks as (
        select week
        from generate_series(
            '2017-01-01'::date,
            now()::date,
            '1 week'::interval
        ) week
    )
    select weeks.week, count(1)
    from weeks, test_results
    where
        test_results.date_created > (weeks.week - '1 week'::interval) and
        test_results.date_created <= weeks.week
    group by week
    ;


Remove the comma before the SELECT?


I understand from the discussion that if you want a field with only four qualitative categories (0, 1, 2, 3, with zero meaning "none" and 3 being "very much"), you could use a numrange or int4range, for example, instead of the standard integer type. Interesting. Apart from being much more restrictive in the allowed input, are there other advantages (less memory?) or cons (possible portability problems?) that we should be aware of?

Footnote:

> Here’s just a few examples of things you could do with interals:

The author of the article might want to fix the 'interals' typo in the text.


Handling birthdays and ages, my fav:

    SELECT age('1971-01-01'::date);
    SELECT age('2015-01-01'::date, '1971-01-01'::date);


date_trunc() is one way, but to_char() is even better, as you can format the output into something nicer. Doing:

  SELECT DATE_TRUNC('week', CURRENT_TIMESTAMP);
gives:

  2017-06-05 00:00:00+00
vs:

  SELECT TO_CHAR(CURRENT_TIMESTAMP, 'YYYY-WW"wk"');
gives:

  2017-23wk


Note that there's the ISO standard for weeks which uses slightly different abbreviations:

    SELECT to_char(now(), 'IYYY-"W"IW');
The difference is when the first week of the year starts. Compare yours to the ISO 8601 format for January 1 this year:

    $ SELECT to_char('2017-01-01'::date, 'IYYY-"W"IW');
    > 2016-W52

    $ SELECT to_char('2017-01-01'::date, 'YYYY-"W"WW');
    > 2017-W01


It's not the same: date_trunc returns a timestamp, to_char returns a string.


True, but consider the use case in OP's example. In the article, OP says "So if we wanted to find the count of users that signed up per week:" - and "2017-06-05 00:00:00+00" isn't a week; it's a date (with a timestamp as well, which isn't pertinent) that happens to be the beginning of a week. Using TO_CHAR() with a format string makes it more legible and more recognizable.


Have to mention https://github.com/activityclub/pointspaced in an HN post about time. I use psd for so many queries now vs. SQL.


Small question / nitpick,

WHERE created_at >= now() - '1 week'::interval

would mean within the last 7 days, right? Not last week?

Did some work on this recently in MySQL and had to resort to calculating this using strtotime('last week').


One week is always 7 days. But one month (and year) is not always the same length. If you add or subtract n months from a timestamp or date at the beginning or the end of the month, it returns the beginning or end of the month n months away. Here's an example showcasing this using the fact that 2016 was a leap year:

    $ select '2016-03-31'::timestamp - '1 month'::interval;
    > 2016-02-29 00:00:00

    $ select '2016-03-31'::timestamp + '11 month'::interval;
    > 2017-02-28 00:00:00

    $ select '2016-02-29'::timestamp + '1 year'::interval;
    > 2017-02-28 00:00:00


Correct, it would give the results from this exact moment in time back to that same timestamp 7 days ago. Were you thinking it might give you, say, up to the start of the last week or something?


Reading the end of the sentence "within the past week:" just above. However I would be interested to know if the "last week" date range is easily doable in postgres :)


Will try to clarify it a bit further. Offhand you could probably do something like the following to find if it fell within some week range:

created_at BETWEEN date_trunc('week', now() - '1 week'::interval) AND date_trunc('week', now())


the generate_series example is a mess

For those who want to implement this in Python, I've written a gist: https://gist.github.com/Dowwie/bec0a29bcd37eea41cde8d5188626...


Any recommendations for similar "best practices" articles/guides for MySQL?



