True and true. But a decent checker like e2fsck should be paranoid & untrusting of on-disk data structures, and able to retrieve at least some data from a badly-mangled filesystem. It is a fallback tool but a useful one.
I don't quite understand the difference between that and how ZFS is untrusting, with its multiple levels of checksums, copy-on-write, scrubbing, and so on.
Is there a particular type of data corruption that fsck would recover that ZFS would not?
Suppose there is some bug in ZFS or the OS. Checksums will fail, so the problem is detected, but conceivably there could be a class of bug that an fsck tool could fix without having to roll back to a previous version. Another potential advantage: a targeted fix by fsck could be much quicker than restoring a complete backup.
OK. Suppose you are running a buggy ZFS, and it causes problems. Why would you trust a filesystem repair tool written by the same people who wrote your buggy filesystem to fix it for you without causing further damage? If anything, I would trust the filesystem more, as it (one would hope) sees far more usage.
If your OS or disks are buggy, you are hosed anyway, since the checking tool would run on the same OS and hardware.
You should look at ZFS as having a built-in fsck that is automatically invoked when needed.
ZFS should be able to repair a failure in a single block quickly; with fsck you have to unmount the filesystem to even check. And a bad write operation in ZFS should be detected even before the block pointer is set.
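The verify-on-read idea behind this can be sketched roughly as follows. This is a toy model, not real ZFS code: the `BlockPointer` class and function names are made up, and SHA-256 merely stands in for whatever checksum the pool is configured with.

```python
import hashlib

def checksum(data: bytes) -> bytes:
    # Stand-in for the pool's configured checksum (e.g. fletcher4 or SHA-256).
    return hashlib.sha256(data).digest()

class BlockPointer:
    """A parent's pointer to a child block, carrying the child's checksum."""
    def __init__(self, data: bytes):
        self.data = bytearray(data)      # stand-in for the on-disk block
        self.cksum = checksum(data)      # recorded when the block is written

def read_block(bp: BlockPointer) -> bytes:
    # Every read is verified against the checksum the *parent* holds, so a
    # block that silently rotted on disk cannot masquerade as valid data.
    if checksum(bytes(bp.data)) != bp.cksum:
        raise IOError("checksum mismatch: corruption detected")
    return bytes(bp.data)

bp = BlockPointer(b"important file contents")
assert read_block(bp) == b"important file contents"  # clean read passes

bp.data[0] ^= 0xFF                                   # simulate a bit flip on disk
try:
    read_block(bp)
except IOError as e:
    print(e)                                         # detected, not silently returned
```

Because the checksum lives in the parent block pointer rather than next to the data, a misdirected or phantom write can't validate itself; that is the structural difference from a traditional fsck-era filesystem.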
I'm not too familiar with exactly how fsck works, but it seems mainly to stitch bad metadata back together, so there's still no guarantee that your data is perfectly restored.
This would be especially true if a bad FS write operation corrupted the data itself.
If the data is really corrupt, the FS has no inherent way of knowing what the correct data should be.
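That last point can be made concrete: a checksum only tells you a copy is bad; actually *correcting* it requires redundancy (a mirror, copies=2, or RAID-Z parity) to read a good copy from. A toy sketch, with made-up names, of roughly what a self-healing mirror read does:

```python
import hashlib

def checksum(data: bytes) -> bytes:
    # Stand-in for the pool's configured checksum.
    return hashlib.sha256(data).digest()

def self_heal_read(copies: list, expected_cksum: bytes) -> bytes:
    """Return the first copy whose checksum matches, rewriting any bad
    copies from a good one (roughly what a ZFS mirror read does)."""
    good = next((c for c in copies if checksum(bytes(c)) == expected_cksum), None)
    if good is None:
        # Every copy is bad: we *know* the data is corrupt, but there is
        # no inherent way to know what it should have been.
        raise IOError("unrecoverable: all copies fail checksum")
    for i, c in enumerate(copies):           # repair bad copies in place
        if checksum(bytes(c)) != expected_cksum:
            copies[i] = bytearray(good)
    return bytes(good)

data = b"payload"
ck = checksum(data)
mirror = [bytearray(data), bytearray(b"gArbage")]  # one copy silently corrupted
assert self_heal_read(mirror, ck) == data          # healed from the good copy
assert bytes(mirror[1]) == data                    # bad copy rewritten
```

When every copy fails the checksum, all the filesystem can do is report the affected file, which is exactly the "detect but can't reconstruct" limit described above, and it applies to fsck just as much.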