* You know exactly what your app needs to do, up-front
But isn't this true of any database? Generally, adding a new index to a 50 million row table is a pain in most RDBs. As is adding a column, or in some cases, even deleting an index. These operations usually incur downtime, or some tricky table-duplication-and-migration process that is rather compute- and I/O-intensive... and risky.
50M rows is really not that much; I'd guesstimate an index creation to take single-digit minutes.
I wouldn't expect any of these operations to cause downtime, require table duplication, or be risky.
Edit: to be fair, you're right that there are footguns. Make sure index creation uses CONCURRENTLY, and be careful with column defaults that might take a lock. It's easy to do the right thing and have no problem, but also easy to do the wrong thing and have downtime.
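A minimal sketch of the safe vs. unsafe variants (table, column, and index names here are made up for illustration):

```sql
-- Unsafe on a busy table: holds a lock that blocks writes for the whole build
CREATE INDEX orders_customer_idx ON orders (customer_id);

-- Safe: builds without blocking concurrent writes
-- (slower overall, and cannot run inside a transaction block)
CREATE INDEX CONCURRENTLY orders_customer_idx ON orders (customer_id);

-- Since Postgres 11, adding a column with a non-volatile default is a fast
-- metadata-only change; a volatile default (e.g. random()) still forces a
-- full table rewrite under lock
ALTER TABLE orders ADD COLUMN region text DEFAULT 'unknown';
```

The CONCURRENTLY caveat matters for migration tools that wrap every migration in a single transaction: the concurrent index build has to run as a non-transactional step.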
Newer versions of Postgres also support dropping indexes concurrently. I recommend using the concurrently option when dropping unused or unneeded indexes on any table with active writes and reads.
https://www.postgresql.org/docs/current/sql-dropindex.html
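Sketch, with the same made-up index name as above:

```sql
-- Plain drop takes an ACCESS EXCLUSIVE lock on the table,
-- blocking new reads and writes until it completes
DROP INDEX orders_customer_idx;

-- Concurrent drop waits for in-flight queries to finish
-- instead of blocking new ones (also can't run in a transaction block)
DROP INDEX CONCURRENTLY orders_customer_idx;
```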