"Be careful here. Keep in mind that if any single vdev fails, the entire pool fails with it. There is no fault tolerance at the pool level, only at the individual vdev level!"
I've been architecting and implementing highly fault-tolerant, large-scale storage solutions with ZFS since a week after it came out at the end of June 2006, but thank you, I'll be extra sure to keep that in mind.
Please do not assume that everyone here comes from GNU/Linux and knows next to nothing, or only half-true anecdotes, about (Open)ZFS. I, for example, learned the what and the how of ZFS directly from the ZFS development team at Sun Microsystems: Roch Bourbonnais, Adam Leventhal, Eric Schrock, and Jeff Bonwick.
I didn't say "increase disk utilization with Swift"; I said that ZFS does not use the whole disk when you add a new, larger disk unless you replace the entire vdev's worth of disks. Swift treats each disk individually and uses weighting to deal with heterogeneous disk sizes, which is much cleaner.
Replacing entire disks to grow the pool is simply a different way of adding capacity, because ZFS is based on different paradigms than the run-of-the-mill implementations one may be used to; in practice, however, it is a much simpler and therefore much more reliable way to extend capacity. The units are simply entire, larger disks.
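For instance, here is a minimal sketch using file-backed vdevs (the pool name, paths, and sizes are made up for illustration): a mirror vdev is capped by its smallest member, and replacing that member with a larger disk, with autoexpand enabled, grows the pool.

    # Two file-backed "disks" of different sizes (arbitrary paths/sizes).
    truncate -s 1G /tmp/d0
    truncate -s 2G /tmp/d1

    # A mirror vdev is capped by its smallest member: this pool is ~1G, not 2G.
    zpool create tank mirror /tmp/d0 /tmp/d1
    zpool list tank

    # Grow the pool by replacing the small disk with a larger one,
    # then let the vdev expand to the new common size.
    truncate -s 2G /tmp/d2
    zpool set autoexpand=on tank
    zpool replace tank /tmp/d0 /tmp/d2
    zpool list tank    # ~2G once resilvering completes

    zpool destroy tank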
Swift is also how many orders of magnitude more complex?
There is never any fear of losing a pool to a failed vdev, because ZFS lets one simulate the entire setup first, using plain files as disks:
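(A minimal sketch; the file paths and sizes are arbitrary, and no real hardware is at risk.)

    # Four sparse files stand in for disks.
    truncate -s 512M /tmp/z0 /tmp/z1 /tmp/z2 /tmp/z3

    # Pool of two mirror vdevs, entirely backed by files.
    zpool create sandbox mirror /tmp/z0 /tmp/z1 mirror /tmp/z2 /tmp/z3
    zpool status sandbox

    # Degrade one vdev: the mirror loses a member, the pool stays online.
    zpool offline sandbox /tmp/z0
    zpool status sandbox

    # Bring the member back and verify the pool heals.
    zpool online sandbox /tmp/z0
    zpool scrub sandbox
    zpool status sandbox

    # Tear the sandbox down when done experimenting.
    zpool destroy sandbox
    rm /tmp/z0 /tmp/z1 /tmp/z2 /tmp/z3

One can rehearse any layout, failure, or replacement scenario this way before touching a single real disk.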