You just copy the WAL to another server and replay it there. It takes about a day to set up and test. Once that is running you have two options: asynchronous replication (which means you'll lose about 100 ms of data if the master crashes), or synchronous replication, where a transaction doesn't commit until the WAL has been replicated to the other server. (Sync adds latency to each commit but doesn't really affect throughput.)
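As a rough sketch, the choice between the two modes comes down to a couple of lines in postgresql.conf on the master (the standby name here is a placeholder; it has to match the application_name the standby connects with):

    # async (the default): commits return once WAL is flushed locally
    synchronous_standby_names = ''

    # sync: commit waits until the named standby confirms it received the WAL
    synchronous_standby_names = 'standby1'
    synchronous_commit = on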
I'm not exactly sure how the failover system works in Postgres. The last time I set up replication on Postgres it would only copy a WAL segment after it was fully written, but I know they have a much more fine-grained system now.
If you use SQL Server you can add a third monitoring server (a witness), and your connections fail over to the new master pretty much automatically, as long as you add the second server to your connection string. That three-server setup can create some very strange failure modes, though.
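For reference, the second server goes into the connection string via the Failover Partner keyword, which ADO.NET clients use to retry against the mirror when the principal is unreachable (server and database names below are placeholders):

    Data Source=sql-primary;Failover Partner=sql-mirror;Initial Catalog=MyAppDb;Integrated Security=True;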
Another possibility on AWS is to put the WAL on S3, since Amazon S3 offers high durability. Heroku has published a tool for doing this: https://github.com/heroku/WAL-E, which they use to manage their database product.
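Roughly, WAL-E hooks into Postgres's archive_command so each completed WAL segment gets pushed to S3, plus a command for full base backups. A minimal sketch based on the project's docs (the envdir path holding the S3 prefix and AWS credentials, and the data directory, are assumptions):

    # postgresql.conf: ship each completed WAL segment to S3
    wal_level = archive
    archive_mode = on
    archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'

    # take a full base backup to S3
    $ envdir /etc/wal-e.d/env wal-e backup-push /var/lib/postgresql/9.1/main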
If this is for Postgres, see:
http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial
See the section "Starting Replication with only a Quick Master Restart". I tested this with a DB under 10 GB; the setup time for replication was on the order of minutes. On a related note, it would be great if someone could share real-world experiences with Postgres 9.1 synchronous replication.
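For context, the streaming setup in that tutorial boils down to a few config lines on each side. A minimal sketch for 9.1, after taking a base backup of the master (hostname, user, and standby name are placeholders; the master also needs a replication entry in pg_hba.conf):

    # master postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3

    # standby recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=master-host port=5432 user=replicator application_name=standby1'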