
I've used SQLite with HashBackup for 13 years and it's been awesome. A lot of the stories I read about "data corruption" with SQLite are, IMO, really about faulty applications that do database commits with the data in an inconsistent state from the application point of view. I've done it myself - it's an easy sin.

I've migrated database versions 35 times over the last 13 years, i.e., every individual HB database has been migrated 35 times. You don't always need to make a new table, do a copy, and switch over. In the latest migration, I added new columns, initialized them, dropped columns, etc. without doing a db copy.
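To make that concrete, here's a rough sketch of what an in-place migration can look like from Python's sqlite3 module. The table name, column names, and version number are invented for illustration (they're not HashBackup's schema), and ALTER TABLE ... DROP COLUMN needs SQLite 3.35 or later:

    import sqlite3

    def migrate_in_place(path):
        # Autocommit mode so the transaction below is controlled explicitly.
        con = sqlite3.connect(path, isolation_level=None)
        try:
            con.execute("BEGIN IMMEDIATE")        # take the write lock up front
            # Add and backfill the new column, then drop the column it replaces.
            con.execute("ALTER TABLE files ADD COLUMN mtime_ns INTEGER")
            con.execute("UPDATE files SET mtime_ns = mtime * 1000000000")
            con.execute("ALTER TABLE files DROP COLUMN mtime")
            con.execute("PRAGMA user_version = 36")   # record the new schema version
            con.execute("COMMIT")
        except Exception:
            if con.in_transaction:
                con.execute("ROLLBACK")
            raise
        finally:
            con.close()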

For this migration I wanted to switch to strict tables, which enforce column types. I could have done this by just altering the schema (it's just text stored in the SQLite db) and then using SQL with CAST to make sure existing columns held correctly typed data. But instead, I created a general framework to migrate data from one schema to another, mainly so I could reorder columns if I wanted. Reordering can't be done with ALTER TABLE statements, so this migration did end up doing a complete copy, but I've done many migrations without one.
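Here's a rough sketch of the copy-style approach, again with invented table and column names rather than HashBackup's real schema. The new table is declared STRICT (SQLite 3.37+), rows are copied with CAST so existing values satisfy the strict types, and the column order in the new CREATE TABLE is where the reordering happens:

    import sqlite3

    def migrate_to_strict(path):
        con = sqlite3.connect(path, isolation_level=None)
        try:
            con.execute("BEGIN IMMEDIATE")
            # New table with the desired (reordered) columns and strict typing.
            con.execute("""
                CREATE TABLE files_new (
                    fileid INTEGER PRIMARY KEY,
                    size   INTEGER NOT NULL,
                    name   TEXT NOT NULL
                ) STRICT""")
            # CAST cleans up any loosely typed values from the old table.
            con.execute("""
                INSERT INTO files_new (fileid, size, name)
                SELECT fileid, CAST(size AS INTEGER), CAST(name AS TEXT)
                FROM files""")
            con.execute("DROP TABLE files")
            con.execute("ALTER TABLE files_new RENAME TO files")
            # Any indexes, triggers, or views on the old table must be recreated here.
            con.execute("COMMIT")
        except Exception:
            if con.in_transaction:
                con.execute("ROLLBACK")
            raise
        finally:
            con.close()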

I found this paper on "zero downtime migrations" interesting:

https://postgres.ai/blog/20210923-zero-downtime-postgres-sch...

After reading it, my takeaway is that schema changes happen within transactions (true for SQLite too), and the key to zero-downtime migrations is to keep transactions short and to use timeouts and retries on all database operations, including the migration commands themselves. You can do all of these with SQLite.
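A minimal sketch of that pattern with Python's sqlite3; the helper name and the retry/backoff numbers are my own, not from the article:

    import sqlite3, time

    def run_in_short_txn(path, work, retries=5, timeout=5.0):
        # Run work(con) in one short IMMEDIATE transaction, retrying on lock errors.
        for attempt in range(retries):
            con = sqlite3.connect(path, timeout=timeout, isolation_level=None)
            try:
                con.execute("BEGIN IMMEDIATE")    # grab the write lock, or fail fast
                work(con)
                con.execute("COMMIT")
                return
            except sqlite3.OperationalError:      # e.g. "database is locked"
                if con.in_transaction:
                    con.execute("ROLLBACK")
                time.sleep(0.1 * (attempt + 1))   # brief backoff before retrying
            finally:
                con.close()
        raise RuntimeError("gave up after %d attempts" % retries)

Each migration step then runs as its own short transaction, so readers and writers only ever wait for one small unit of work (plus the timeout) instead of blocking behind the whole migration.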
