I do a lot of devops work for different companies and "agencies", and I have seen a wide range of solutions so far. Unfortunately, most of the existing backup solutions were really bad, and I realized how little anyone cared about them.
So HN: how, what, how often, where and when do YOU back up your servers?
Examples:
Config files: everything in /etc/ is under version control with git. Every time I make a change, I commit and push it to a private (offsite) server, where every server has its own repo.
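That workflow could be sketched roughly like this (the function name, committer identity, and remote name are placeholders, not part of the setup described above):

```shell
# Hedged sketch of the "/etc under git" workflow: commit any pending
# changes and push to a per-server offsite repo, if one is configured.
backup_config() (
    set -e
    cd "$1"
    [ -d .git ] || git init -q
    git add -A
    # Commit only when something actually changed.
    git diff --cached --quiet || \
        git -c user.name=backup -c user.email=backup@example.com \
            commit -q -m "config change $(date -u +%F)"
    # Push to the offsite repo; silently skip if no remote is set up.
    git push -q origin master 2>/dev/null || true
)

# In the scenario above this would be run as: backup_config /etc
```

A tool like etckeeper automates essentially the same idea, including hooking into the package manager.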
Database (PostgreSQL): every night at 02:00 a full backup with pg_dump, triggered by cron via a homebrew bash backup script. The dump gets scp'ed to another host in a different datacenter, and old backups are deleted manually when the disk is getting full.
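A minimal version of that nightly job could look like the crontab entry below (database name, paths, and destination host are placeholders; note that `%` has to be escaped as `\%` in crontab):

```shell
# Hypothetical crontab entry: nightly pg_dump at 02:00, custom format,
# then copy the dump to a host in another datacenter.
# m h  dom mon dow  command
0 2 * * *  pg_dump -Fc mydb > /var/backups/mydb-$(date +\%F).dump && \
           scp /var/backups/mydb-$(date +\%F).dump backup@otherdc:/srv/pgbackups/
```

The custom format (`-Fc`) is compressed and lets pg_restore do selective restores, which is often nicer than a plain SQL dump.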
After running into scaling issues (especially when I needed to find a specific file without knowing the date or server name), I wrote (and open-sourced) Snebu. It works much like rsync-based snapshots, but it stores the file metadata in an SQLite database (so there is very little setup complexity/maintenance), and individual files are stored compressed in a vault directory, named by the SHA1 checksum of their contents. This gives cross-server file-level deduplication for free, somewhat similar to how git stores files.
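The content-addressed vault idea can be illustrated with a few lines of shell (this is an illustrative sketch, not Snebu's actual code): a file is stored once under its checksum name, so identical files from any number of servers occupy a single slot in the vault.

```shell
# Illustrative sketch of content-addressed storage with deduplication:
# store a file compressed under its SHA1 name; if that content was
# already stored (by any server), do nothing.
store_file() {
    vault=$1; file=$2
    sum=$(sha1sum "$file" | cut -d' ' -f1)
    # Only write the vault entry if this content hasn't been seen before.
    [ -f "$vault/$sum.gz" ] || gzip -c "$file" > "$vault/$sum.gz"
    echo "$sum"
}
```

Backing up the same file twice (or from two servers) yields the same checksum and a single vault entry.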
I just recently posted a 1.0 release, although I still need to improve the documentation a bit, post a few more front-end scripts, and finish the in-app help pages. But I would love to get some feedback on it.
The site is www.snebu.com if you are interested (this points to the GitHub-hosted page; the project page is github.com/derekp7/snebu).
For data retention, I keep daily backups for a couple of weeks, weekly backups for six weeks, monthlies for a year, and yearlies for as long as I have space.
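A rotation like that boils down to "keep the newest N in each tier, delete the rest." Assuming date-stamped filenames (so lexical sort equals chronological sort, and no spaces in names), a pruning step might look like this sketch (the function name and directory layout are made up for illustration):

```shell
# Hedged sketch of tiered retention pruning: keep the newest $2 backups
# in directory $1 and delete the rest. Assumes date-stamped filenames
# so that a reverse lexical sort puts the newest files first.
prune_keep() {
    dir=$1; keep=$2
    ls -1 "$dir" | sort -r | tail -n +"$((keep + 1))" | while read -r f; do
        rm -f "$dir/$f"
    done
}

# e.g. for the schedule above:
#   prune_keep /srv/backups/daily 14
#   prune_keep /srv/backups/weekly 6
#   prune_keep /srv/backups/monthly 12
```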