Right, but the difference is that this is done on the backend with no app awareness. Doing it in the app ties the app logic to the database schema. Using pgroll would allow the database and app to get out of sync.



Allowing the database and the application to be out of sync (to +1/-1 versions) is really the point of pgroll though.

pgroll presents two versions of the database schema, to be used by the current and vNext versions of the app while syncing data between the two.

An old version of the app can continue to access the old version of the schema until such time as all instances of the application are gracefully shut down. At the same time the new versions of the app can be deployed and run against the new schema.
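
Concretely, each application version selects the schema version it was written against via its Postgres search_path, something along these lines (the schema names here are just illustrative):

    -- old app instances keep pointing at the previous version of the schema
    SET search_path TO 'public_01_initial_schema';

    -- new app instances point at the version created by the in-progress migration
    SET search_path TO 'public_02_add_check_constraint';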


But then how do you avoid errors introduced by the lack of app awareness? For example, the old app can keep writing data through the old schema, while the new version makes assumptions based on the new schema. Those assumptions could be broken by new data coming in through the old schema that doesn't satisfy them.

Seems like a dangerous way to introduce very hard to find data inconsistencies.


> For example the old app can keep writing data with the old schema, whereas the new version makes assumptions based on the new schema

To do safe migrations it's a good idea to avoid rolling out versions that start making assumptions about the new schema until the migration is done and dusted.

This does of course start to create weird territory where migrations cease to be trivially "stackable" if you want to retain the whole zero-downtime thing, but this is true for a lot of "graceful update" patterns in computing (in fact I've not seen it solved properly before).


> To do safe migrations it's a good idea to avoid rolling out versions that start making assumptions about the new schema until the migration is done and dusted.

Or make the app do the migration as part of its normal operation, so you have total control over it. :)

I think we're saying the same thing; I'm just suggesting it's safer to do it in the app itself, because then app changes and database changes are tied together instead of divorced from each other.

If you only have one or two people working on the code that interacts with the table being changed, both methods are pretty much the same. But if you have a large team, or want to be able to roll back more easily, then doing it all in the app code feels like the better way to go for me.


For migrations altering columns (for instance, adding a constraint), data gets upgraded/downgraded between the old and new versions through the up/down functions. These are defined by the migration. They work like this:

- For rows inserted/updated through the old version of the schema, the up function gets executed, copying the resulting value into the new column

- For rows inserted/updated through the new version, the down function gets executed, copying the resulting value into the old column

For instance, you can test that the up function works before releasing the new app that relies on it, just by checking that the values present in the new schema are correct.
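
As a sketch, for a hypothetical migration that makes a nullable 'description' column NOT NULL, the up/down pair could be SQL expressions like these (the column and table names are illustrative):

    -- up: rewrite values coming from the old schema so they satisfy the new constraint
    CASE WHEN description IS NULL THEN 'description for ' || name ELSE description END

    -- down: values written through the new schema can be copied back as-is
    description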


What happens if you write data into the old table that violates the constraint the new schema adds? Does the up function fail?


An example would make this more concrete.

This migration[0] adds a CHECK constraint to a column.

When the migration is started, a new column with the constraint is created and values from the old column are backfilled using the 'up' SQL from the migration. The 'up' SQL rewrites values that don't meet the constraint so that they do.

The same 'up' SQL is also applied 'on the fly' as applications write data to the old schema: a trigger uses it to copy the data into the new column, rewriting values as necessary so that the constraint on the new column is met.

As the sibling comment makes clear, it is currently the migration author's responsibility to ensure that the 'up' SQL really does rewrite values so that they meet the constraint.
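
Paraphrasing rather than quoting the file verbatim, a migration of that shape looks roughly like this (the table/column names and the exact 'up' rewrite here are illustrative):

    {
      "name": "22_add_check_constraint",
      "operations": [
        {
          "alter_column": {
            "table": "posts",
            "column": "title",
            "check": {
              "name": "title_length",
              "constraint": "length(title) > 3"
            },
            "up": "CASE WHEN length(title) <= 3 THEN lpad(title, 4, '-') ELSE title END",
            "down": "title"
          }
        }
      ]
    }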

[0] - https://github.com/xataio/pgroll/blob/main/examples/22_add_c...


If I may be so bold, I read your docs quite carefully last night and it wasn’t immediately clear to me that “up” and “down” are automagically run to keep both the old and the new data in sync while a migration is pending. To my reading this is a spectacular feature, because it solves the “the only way is forward” problem (where you can never actually roll back a migration without risking losing, or making inconsistent, some recent data) for many kinds of migrations.

May I suggest spelling this out more explicitly in the docs? As far as I can tell it’s only mentioned by example halfway down the tutorial. To me it seems like one of pgroll’s true killer features, worth more coverage :-)


Yes, this is the case as of today. It's important to get the up/down functions right.

The good thing is that this will probably be detected during the start phase, as data would fail to be backfilled.

We are thinking of ideas to improve this in the longer term, for example:

* Having a dry-run mode where you check that the data present in the DB will stay valid after applying the up function

* Optionally allow for this to happen, but "quarantine" the affected rows so they are no longer available in the new schema.



