Sounds like a worthwhile improvement to rsync, but I wonder why this setup is preferred to duplicity [1] or rdiff-backup [2], which both also use rsync (librsync) to perform incremental backups. I've had good experiences with duplicity in particular.
rdiff-backup will give you copies of the changed files (so 1.2 TiB for the database file mentioned in the article) every time you run a backup, for each old version you keep. It will not transfer that much data over the network (since it uses the rsync algorithm), but it stores that much on disk.
On the other hand, if you use a filesystem with copy-on-write snapshots and modify the changed files in place, you will only use as much disk space as there are changed blocks in the file for each version you keep. (Of course you have no additional redundancy if you keep n older versions, as each bit of data is only stored once physically. But the rdiff-backup scenario also stores only one copy of an unchanged file, so you should alternate between different backup disks anyway.)
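A rough back-of-the-envelope model of the two approaches (the file size is the 1.2 TiB figure from the article; the block size, number of changed blocks, and retention count are made-up numbers for illustration):

```python
# Hypothetical numbers modeling the 1.2 TiB database file from the article.
FILE_SIZE = int(1.2 * 2**40)   # 1.2 TiB database file
BLOCK_SIZE = 4096              # filesystem block size (assumed)
CHANGED_BLOCKS = 50_000        # blocks modified between two runs (assumed)
VERSIONS = 7                   # older versions kept (assumed)

# Naive full-copy retention: one complete copy per kept version.
full_copies = FILE_SIZE * (VERSIONS + 1)

# CoW snapshots with in-place modification: one full copy,
# plus only the changed blocks for each older version.
cow = FILE_SIZE + VERSIONS * CHANGED_BLOCKS * BLOCK_SIZE

print(f"full copies: {full_copies / 2**40:.2f} TiB")
print(f"CoW snapshots: {cow / 2**40:.4f} TiB")
```

With these numbers the snapshot approach keeps seven old versions for about 1.4 GiB of extra space, versus 8.4 TiB of extra copies.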
This isn't entirely correct: rdiff-backup keeps a full copy of the latest version of the file plus a set of reverse binary diffs that can be applied in sequence to roll it back to an earlier version. rdiff-backup actually ends up a little more space-efficient per incremental change, since its diffs don't need to store entire filesystem blocks.
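The reason both tools can avoid retransmitting the whole 1.2 TiB is the rsync signature/delta idea: the receiver's blocks are hashed, and only data that doesn't match an existing block gets sent. A much-simplified sketch (real librsync uses a rolling weak checksum so matches can be found at any byte offset, and MD4/blake2 rather than the MD5 used here; this fixed-boundary version only handles in-place overwrites, which is the database-file case):

```python
import hashlib

BLOCK = 4096  # assumed block size

def signature(old: bytes) -> dict[str, int]:
    """Map each block's hash in the old file to its offset."""
    return {hashlib.md5(old[i:i + BLOCK]).hexdigest(): i
            for i in range(0, len(old), BLOCK)}

def delta(old_sig: dict[str, int], new: bytes) -> list[tuple]:
    """Emit ('copy', offset) for blocks the receiver already has,
    ('literal', data) for blocks that must actually be sent."""
    ops = []
    for i in range(0, len(new), BLOCK):
        block = new[i:i + BLOCK]
        h = hashlib.md5(block).hexdigest()
        if h in old_sig:
            ops.append(("copy", old_sig[h]))   # no data transferred
        else:
            ops.append(("literal", block))     # transferred (and storable as a diff)
    return ops
```

Overwriting one block of a four-block file yields a delta with three cheap copy ops and a single literal block, which is also roughly what an rdiff-style reverse diff has to store per version.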
[1] http://duplicity.nongnu.org/
[2] http://www.nongnu.org/rdiff-backup/