
SQLite isn't necessarily easier to back up than a filesystem. I've got a fairly large (~60GB) SQLite database that I'm somewhat eager to get off of. If I'd stuck with plain files, backing up only the changeset would be trivial, but with SQLite I have to store a new copy of the entire database.

I've tried various solutions like Litestream and even xdelta3 (which generates patches larger than the actual changeset), but I haven't found an approach I'm confident in other than backing up complete snapshots.
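For reference, "complete snapshots" here means something like the stdlib sqlite3 backup API, which gives a consistent page-level copy even while writers are active (paths below are placeholders):

    import sqlite3

    src = sqlite3.connect("app.db")
    dst = sqlite3.connect("app-snapshot.db")
    with dst:
        src.backup(dst)  # consistent page-level copy, safe with live writers
    dst.close()
    src.close()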




You might like the new sqlite3_rsync command https://www.sqlite.org/rsync.html


Yeah, that looks ideal for this exact problem, because it lets you stream a snapshot backup of a SQLite database over SSH without needing to first create a duplicate copy using .backup or vacuum. My notes here: https://til.simonwillison.net/sqlite/compile-sqlite3-rsync
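If you'd rather drive it from a script, a minimal sketch via subprocess (host and paths are placeholders; the sqlite3_rsync binary has to be installed on both ends):

    import subprocess

    # Equivalent to: sqlite3_rsync app.db user@backup-host:/backups/app.db
    # Only pages that differ between origin and replica get sent over SSH.
    subprocess.run(
        ["sqlite3_rsync", "app.db", "user@backup-host:/backups/app.db"],
        check=True,
    )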


Maybe that tool just doesn't fit my use case, but I'm not sure how you'd use it for incremental backups. I store all of my backups in S3 Glacier for the cheap storage, so there's nothing for me to rsync onto.

I can see how you'd use it for replication though.


If you want incremental backups to S3 I recommend Litestream.
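A minimal sketch of kicking it off (bucket name and paths are placeholders, AWS credentials come from the environment; in production you'd normally run litestream replicate as a service with a YAML config instead):

    import subprocess

    # Runs continuous replication: Litestream tails the WAL and ships
    # incremental segments to S3, so you aren't re-uploading the full DB.
    subprocess.run(
        ["litestream", "replicate", "app.db", "s3://my-backup-bucket/app.db"],
        check=True,  # blocks until the process is stopped
    )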


What do you do about compaction?


zstd, though that only shaves off a few GB total.


Oh, I mean with the VACUUM command. As databases get larger, vacuuming can become unwieldy.


I just tried it; it only recovered a few MB.
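For anyone curious, you can estimate what VACUUM would reclaim before running it, using the standard pragmas (the path is a placeholder; VACUUM also defragments, so the real figure can differ a little):

    import sqlite3

    conn = sqlite3.connect("app.db")
    page_size = conn.execute("PRAGMA page_size").fetchone()[0]
    freelist = conn.execute("PRAGMA freelist_count").fetchone()[0]
    print(f"~{page_size * freelist / 1_000_000:.1f} MB on the freelist")
    conn.close()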


Aha, so not much need then. I've always avoided SQLite for larger databases because of the extra space needed to allow compaction, though maybe that's not a real problem in most applications.
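Worth noting: SQLite 3.27+ has VACUUM INTO, which writes the compacted copy to a separate file, so the scratch space can live on a different volume from the original, and the output doubles as a consistent snapshot. A minimal sketch (paths are placeholders):

    import sqlite3

    conn = sqlite3.connect("app.db")
    conn.execute("VACUUM INTO 'app-compacted.db'")  # original left untouched
    conn.close()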



