Microsoft found that compressed pages were always faster, because the latency added by de-/compression was less than the latency of going to disk (given a sufficiently fast compression algorithm). As a bonus, it's also faster to read and write compressed pages to disk (if that absolutely has to happen). Memory compression is therefore enabled by default on Windows.
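Roughly, the arithmetic behind that tradeoff looks like this (a back-of-the-envelope sketch; the throughput and latency numbers below are illustrative assumptions, not Microsoft's measurements):

    # Back-of-the-envelope comparison: compressing a page vs. going to disk.
    # All numbers are illustrative assumptions, not measured values.
    PAGE_SIZE = 4096                  # bytes; typical x86-64 page
    COMPRESS_THROUGHPUT = 500e6       # bytes/s; a fast LZ-class compressor
    DISK_LATENCY = 100e-6             # seconds; ballpark SSD access time

    compress_time = PAGE_SIZE / COMPRESS_THROUGHPUT
    print(f"compress one page: {compress_time * 1e6:.1f} us")   # ~8.2 us
    print(f"disk access:       {DISK_LATENCY * 1e6:.1f} us")    # 100.0 us
    # While compress_time stays well below DISK_LATENCY, keeping pages
    # compressed in RAM beats swapping them out.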
I configure my own kernel on Arch and Zswap is enabled by default there, too.
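For anyone who wants to check their own system: zswap exposes its knobs under /sys/module/zswap/parameters (documented in the kernel's admin guide, Documentation/admin-guide/mm/zswap.rst). A minimal sketch that dumps them, assuming a Linux box with CONFIG_ZSWAP built:

    from pathlib import Path

    # zswap's runtime parameters live under /sys/module/zswap/parameters.
    params = Path("/sys/module/zswap/parameters")
    if params.is_dir():
        for p in sorted(params.iterdir()):
            print(f"{p.name} = {p.read_text().strip()}")
    else:
        print("no zswap parameters found (kernel built without CONFIG_ZSWAP?)")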
Microsoft's compressed pages implementation seems to work far better than zswap on Linux.
I can't quite see why - perhaps the logic for deciding which pages to compress is different, or there is too much code in the swap subsystem slowing down the compression/decompression path...
Pages as in the 8K (for example) data structures used to store portions of files on disk, or pages as in text files? I'm assuming the former, but I'm not very good with file system internals.
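Neither, strictly: in the zswap/memory-compression context, "pages" are memory pages, the fixed-size units (commonly 4 KiB on x86-64) that the virtual memory subsystem manages, not filesystem blocks or documents. A quick way to check the page size on a given machine:

    import os
    # The page size the virtual memory subsystem (and zswap) works in.
    print(os.sysconf("SC_PAGE_SIZE"))  # typically 4096 bytes on x86-64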