
Here's the way it works for, say, PostgreSQL (a rough command sketch follows the list):

- you rsync or zfs send the database files from machine A to machine B. You would like the database to be off during this process, which will make it consistent. The big advantage of ZFS is that you can stop PG, snapshot the filesystem, and turn PG on again immediately, then send the snapshot. Machine B is now a cold backup replica of A. Your loss potential is limited to the time between backups.

- after the previous step is completed, you arrange for machine A to send WAL files to machine B. It's well documented. You could use rsync or scp here. It happens automatically and frequently. Machine B is now a warm replica of A -- if you need to turn it on in an emergency, you will only have lost one WAL file's worth of changes.

- after that step is completed, you give machine B credentials to log in to A for live replication. Machine B is now a live, very slightly delayed read-only replica of A. Anything that A processes will be updated on B as soon as it is received.
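A rough sketch of those three steps, assuming a ZFS dataset named tank/pgdata, hosts named primary and replica, a replication role named replicator, and a systemd service name; all of these are placeholders, and details vary by PostgreSQL version (standby.signal is the PG 12+ convention):

    # step 1: cold copy via ZFS snapshot (on machine A)
    sudo systemctl stop postgresql
    zfs snapshot tank/pgdata@base
    sudo systemctl start postgresql        # PG is only down long enough to snapshot
    zfs send tank/pgdata@base | ssh replica zfs receive -F tank/pgdata

    # step 2: ship WAL files (postgresql.conf on machine A)
    #   wal_level = replica
    #   archive_mode = on
    #   archive_command = 'rsync -a %p replica:/var/lib/postgresql/wal_archive/%f'

    # step 3: streaming replication
    # on A, allow the replication role in pg_hba.conf:
    #   host  replication  replicator  <address of B>  scram-sha-256
    # on B, point at A and mark the instance as a standby:
    #   primary_conninfo = 'host=primary user=replicator'
    touch /var/lib/postgresql/data/standby.signal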

You can go further and arrange to load balance requests between read-only replicas, while sending the write requests to the primary; you can look at Citus (now open source) to add multi-primary clustering.




This isn't really a backup; it's redundancy, which is a good thing but not the same as a backup solution. You can't get out of a "drop table" in production type of event this way.


The previous commenter was probably unaware of the various ways to back up recent PostgreSQL releases.

For what you describe, a "point in time recovery" backup would probably be the more appropriate flavor: https://www.postgresql.org/docs/current/continuous-archiving...

It was first released around 2010 and has gained robustness with every release, hence not everyone is aware of it.

For instance, I don't think it's really required anymore to shut down the database to do the initial sync if you use the proper tooling (pg_basebackup, if I remember correctly).
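A minimal sketch of that flavor, with made-up paths (/backup/wal, /backup/base), a host named primary, and a role named replicator; the archive_command is adapted from the example in the PostgreSQL docs:

    # postgresql.conf on the primary: archive every completed WAL segment
    #   wal_level = replica
    #   archive_mode = on
    #   archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'

    # initial base backup, taken while the server keeps running
    pg_basebackup -h primary -U replicator -D /backup/base -Fp -Xs -P

    # to recover to a point in time: restore the base backup, then before starting PG set
    #   restore_command = 'cp /backup/wal/%f %p'
    #   recovery_target_time = '2024-01-01 12:00:00'   (any timestamp covered by the archive)
    #   (and create recovery.signal in the data directory on PG 12+)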


Going back 20 years with Oracle DB, it was common to use "triple mirror" on storage to make a block-level copy of the database. Lock the DB for changes, flush the logs, break the mirror. You now have a point-in-time copy of the database that could be mounted by a second system to create a tape backup, or used as a recovery point for a restore.

It was the way to do it, and very easy to manage.


If you add a delay of, say, 30 minutes to one of your replicas, you have another option in a "drop table" type of event.
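PostgreSQL supports this directly on a streaming standby through recovery_min_apply_delay; a minimal sketch, assuming a 30-minute window:

    # on the delayed standby (postgresql.conf in PG 12+, recovery.conf before that)
    recovery_min_apply_delay = '30min'
    # the standby still receives WAL immediately but waits 30 minutes before applying it,
    # which leaves a window to pause replay and rescue data after a bad statement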


If you stop at the first bullet point then you have a backup solution.


It doesn't solve the problem that sending that snapshot to a backup location takes a long time.


No, it doesn't.

It takes exactly the time that it takes, bottlenecked by:

* your disk read speed on one end and write speed on the other, modulo compression

* the network bandwidth between points A and B, modulo compression

* the size of the data you are sending

So, if you have a 10 GB database that you send over a 10 Gb/s link to the other side of the datacenter, it might take as little as 10 seconds. If you have a 10 TB database that you send over a nominally 1 GB/s link that actually sees a lot of congestion from other users, to a datacenter on the other side of the world, it might take a hundred hours or so.

rsync can help a lot here, or the ZFS differential snapshot send.
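With ZFS in particular, only the first transfer has to move the whole dataset; later snapshots can be sent incrementally. A sketch, with tank/pgdata and a host named backup as placeholders:

    # initial full send
    zfs snapshot tank/pgdata@sun
    zfs send tank/pgdata@sun | ssh backup zfs receive tank/pgdata

    # later: send only the blocks changed since @sun
    zfs snapshot tank/pgdata@mon
    zfs send -i tank/pgdata@sun tank/pgdata@mon | ssh backup zfs receive tank/pgdata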


Unless your storage is already mirrored off-site, e.g. EMC SRDF.


So say the disk fails on your main DB, or for some reason a customer needs data from 6 months ago that is no longer in your local snapshots. In order to restore the data, you have to transfer the full database back over.

With multiple databases, you only have to transfer a single database, not all of your data.


pg_dump has an option to output one-table-per-file. You can use this for selective restores later.
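That's presumably the directory output format, which writes one data file per table and lets pg_restore pull back individual tables; a sketch, with mydb and a table named orders as placeholders:

    # dump in directory format: one (compressed) data file per table
    pg_dump -Fd mydb -f /backup/mydb.dir

    # later, restore just one table from that dump
    pg_restore -d mydb -t orders /backup/mydb.dir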


Precisely so.


Do you even have to stop Postgres if using ZFS snapshots? ZFS snapshots are atomic, so I’d expect that to be fine. If it wasn’t fine, that would also mean Postgres couldn’t handle power failure or other sudden failures.


You have choices.

* shut down PG. Gain perfect consistency.

* use pg_dump. Perfect consistency at the cost of a longer transaction. Gain portability for major version upgrades.

* Don't shut down PG: here's what the manual says:

However, a backup created in this way saves the database files in a state as if the database server was not properly shut down; therefore, when you start the database server on the backed-up data, it will think the previous server instance crashed and will replay the WAL log. This is not a problem; just be aware of it (and be sure to include the WAL files in your backup). You can perform a CHECKPOINT before taking the snapshot to reduce recovery time.

* Midway: use SELECT pg_start_backup('label', false, false); and SELECT * FROM pg_stop_backup(false, true); to generate WAL files while you are running the backup, and add those to your backup.
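A sketch of that midway option on ZFS, assuming PostgreSQL 9.6 through 14 (the functions were renamed pg_backup_start/pg_backup_stop in 15) and a dataset named tank/pgdata. The non-exclusive variant has to start and stop the backup from the same session, so the snapshot is taken from inside psql here, and the labelfile column returned by pg_stop_backup should be saved as backup_label alongside the snapshot:

    psql -d postgres <<'SQL'
    SELECT pg_start_backup('fs-snapshot', false, false);  -- non-exclusive backup
    \! zfs snapshot tank/pgdata@fs-snapshot
    SELECT * FROM pg_stop_backup(false, true);  -- true = wait for the WAL to be archived
    SQL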





