
> If you are doing a data migration and failed to account for some unexpected data, now you have people on different schema versions until you figure it out.

That shouldn't be a big issue. Any service large/complex enough to care does schema upgrades in phases: 1. Make the code forward compatible. 2. Migrate the data. 3. Remove old schema support.

So typically it should be safe to run between steps 1 and 2 for a long time (modulo new bugs, of course). As an ops-y person, I'm comfortable with the system running mid-migration as long as the steps are followed as described.
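For concreteness, a minimal sketch of the three phases against a hypothetical users table (the table and column names are made up, and this assumes the app was already deployed to write both columns before phase 2 runs):

    -- Phase 1 (expand): add the new column alongside the old one.
    -- Code deployed at this point writes both and can read either.
    ALTER TABLE users ADD COLUMN email_normalized text;

    -- Phase 2 (migrate): backfill existing rows. Safe to run, and
    -- re-run, while both schema versions are live.
    UPDATE users
    SET email_normalized = lower(email)
    WHERE email_normalized IS NULL;

    -- Phase 3 (contract): only after no code path reads the old column.
    ALTER TABLE users DROP COLUMN email;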



> That shouldn't be a big issue. Any service large/complex enough to care does schema upgrades in phases: 1. Make the code forward compatible. 2. Migrate the data. 3. Remove old schema support.

Exactly this: schema migrations should be an append, deprecate, drop sequence spread over time.
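The "over time" part matters within the migrate step too: one giant UPDATE holds row locks (and bloats the table) for the whole run. A sketch of batching it instead, reusing the hypothetical users table from above:

    -- Run this repeatedly (from a script or application loop) until it
    -- updates zero rows; each pass touches at most 1000 rows, so row
    -- locks are held only briefly.
    WITH batch AS (
        SELECT id
        FROM users
        WHERE email_normalized IS NULL
        ORDER BY id
        LIMIT 1000
        FOR UPDATE SKIP LOCKED
    )
    UPDATE users u
    SET email_normalized = lower(u.email)
    FROM batch
    WHERE u.id = batch.id;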


I wish there were a way to enforce this on the database side so you never accidentally grabbed a table lock during these operations.

I've definitely shot myself in the foot with Postgres on this.
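A concrete example of the foot-gun: a plain SET NOT NULL takes an ACCESS EXCLUSIVE lock while it scans every row. The safer pattern, as far as I know, is a two-step constraint (same hypothetical table as above):

    -- Hazard: blocks all reads and writes while scanning the table.
    -- ALTER TABLE users ALTER COLUMN email_normalized SET NOT NULL;

    -- Safer: add the constraint unvalidated (brief lock, no scan)...
    ALTER TABLE users
        ADD CONSTRAINT users_email_normalized_not_null
        CHECK (email_normalized IS NOT NULL) NOT VALID;

    -- ...then validate in a second statement, which only takes a
    -- SHARE UPDATE EXCLUSIVE lock, so normal traffic keeps flowing.
    ALTER TABLE users VALIDATE CONSTRAINT users_email_normalized_not_null;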


> I wish there were a way to enforce this on the database side so you never accidentally grabbed a table lock during these operations.

You can use a linter for PostgreSQL migrations: https://squawkhq.com/
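To give a rough idea of what it catches (the file name is made up, and I'm going from memory on the rule behavior):

    -- migration.sql, checked with: squawk migration.sql
    -- Squawk flags a plain CREATE INDEX because it blocks writes to
    -- the table for the whole build:
    -- CREATE INDEX users_email_idx ON users (email);

    -- The concurrent form builds without blocking writes (note it
    -- can't run inside a transaction block):
    CREATE INDEX CONCURRENTLY users_email_idx ON users (email);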


And Sqitch (https://sqitch.org/) is a wonderful Perl-based tool for this as well.
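For anyone who hasn't used it: Sqitch tracks each change in a plan file with paired deploy/revert/verify scripts, which keeps the append and drop phases as separate, revertable changes. Roughly what a deploy script looks like (change name hypothetical, generated with something like sqitch add add_email_normalized):

    -- deploy/add_email_normalized.sql
    -- Sqitch runs this on deploy; the matching revert script would
    -- drop the column again.
    BEGIN;

    ALTER TABLE users ADD COLUMN email_normalized text;

    COMMIT;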


This is awesome, thank you!



