> I don't know if this exists or not, but I'd like to try something like a fuse filesystem which can transparently copy a file to a fast scratch SSD when it is first accessed.
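I'm not aware of an off-the-shelf tool either, but the idea is simple enough to sketch with the third-party fusepy bindings. Everything below (paths, the read-only restriction, the copy-on-first-open policy) is an illustrative assumption, not an existing project:

    # Rough sketch using the third-party fusepy package (pip install fusepy).
    # Read-only passthrough that copies each file to a fast scratch directory
    # the first time it is opened, then serves all reads from that copy.
    import errno
    import os
    import shutil
    import sys

    from fuse import FUSE, FuseOSError, Operations

    class ScratchCacheFS(Operations):
        def __init__(self, backing_root, scratch_root):
            self.backing = backing_root
            self.scratch = scratch_root

        def _backing(self, path):
            return os.path.join(self.backing, path.lstrip("/"))

        def _scratch(self, path):
            return os.path.join(self.scratch, path.lstrip("/"))

        def getattr(self, path, fh=None):
            try:
                st = os.lstat(self._backing(path))
            except OSError:
                raise FuseOSError(errno.ENOENT)
            return {key: getattr(st, key) for key in (
                "st_mode", "st_size", "st_uid", "st_gid", "st_nlink",
                "st_atime", "st_mtime", "st_ctime")}

        def readdir(self, path, fh):
            return [".", ".."] + os.listdir(self._backing(path))

        def open(self, path, flags):
            cached = self._scratch(path)
            if not os.path.exists(cached):
                # First access: stage a copy onto the scratch SSD.
                os.makedirs(os.path.dirname(cached), exist_ok=True)
                shutil.copy2(self._backing(path), cached)
            return os.open(cached, os.O_RDONLY)

        def read(self, path, size, offset, fh):
            os.lseek(fh, offset, os.SEEK_SET)
            return os.read(fh, size)

        def release(self, path, fh):
            return os.close(fh)

    if __name__ == "__main__":
        # Usage: python scratchfs.py <backing_dir> <scratch_ssd_dir> <mountpoint>
        backing, scratch, mountpoint = sys.argv[1:4]
        FUSE(ScratchCacheFS(backing, scratch), mountpoint, foreground=True, ro=True)

A real version would also need invalidation when the backing copy changes, eviction when the scratch SSD fills up, and write handling, which is where the block-layer caches below earn their keep.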
But L2ARC only helps read speed. The idea with dm-writecache is to improve write speed.
I started thinking about this when considering using a SAN for the disks, in which case write speed would be limited by the 10GbE network I had. A local NVMe could then absorb write bursts and maintain performance.
That said, it's not something I'd want to use in production, that's for sure.
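For reference, setting up dm-writecache boils down to a single dmsetup table line; here's a rough Python sketch of it (device paths are placeholders, run as root, and the table layout is my reading of the kernel writecache target docs):

    # Rough sketch: put a local NVMe partition in front of a slower
    # (e.g. SAN-backed) volume using the dm-writecache target.
    import subprocess

    ORIGIN = "/dev/mapper/san-vol"   # slow backing device (placeholder)
    CACHE = "/dev/nvme0n1p1"         # fast local NVMe partition (placeholder)
    NAME = "san-vol-wc"

    def sectors(dev):
        # Size of a block device in 512-byte sectors.
        return int(subprocess.check_output(["blockdev", "--getsz", dev]).strip())

    # Table: <start> <length> writecache <p|s> <origin> <cache> <block size> <#opt args>
    table = "0 {} writecache s {} {} 4096 0".format(sectors(ORIGIN), ORIGIN, CACHE)
    subprocess.run(["dmsetup", "create", NAME, "--table", table], check=True)
    # Writes then land on the NVMe first and are flushed to the origin in the background.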
There was some work being done on writeback caching for ZFS[1]; sadly, it seems to have remained closed-source.
That's what the SLOG is for, if the writes are synchronous. If you have many small files or want to optimize metadata speed, look at the metadata special device, which can also store small files up to a configurable size.
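In practice that's a couple of commands; roughly (pool, dataset, and device names below are made up):

    # Rough sketch; pool, dataset and device names are placeholders.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Dedicated SLOG device: only helps synchronous writes (the ZIL).
    run("zpool", "add", "tank", "log", "/dev/nvme0n1p1")

    # Special allocation class for metadata; mirror it, because losing it loses the pool.
    run("zpool", "add", "tank", "special", "mirror", "/dev/nvme1n1", "/dev/nvme2n1")

    # Also store data blocks up to 32K on the special vdev for this dataset.
    run("zfs", "set", "special_small_blocks=32K", "tank/data")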
ZFS of course has its limits too, but in my experience I feel much more confident (re)configuring it. You can tune real-world performance well enough, especially if you can use some of ZFS's advanced features like snapshots/bookmarks plus zfs send/recv for backups. With LVM/XFS you can certainly hack something together that works pretty reliably too, but with ZFS it's all integrated and well tested (because it is a common use case).
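The backup loop is short enough to show; a rough sketch of an incremental send/recv plus a bookmark (dataset and snapshot names are made up):

    # Rough sketch of an incremental snapshot backup; names are placeholders.
    import subprocess

    SRC, DST = "tank/data", "backup/data"
    PREV, CUR = "snap-2024-01-01", "snap-2024-01-02"

    subprocess.run(["zfs", "snapshot", SRC + "@" + CUR], check=True)

    # Send only the delta between the two snapshots and pipe it into recv.
    send = subprocess.Popen(["zfs", "send", "-i", SRC + "@" + PREV, SRC + "@" + CUR],
                            stdout=subprocess.PIPE)
    subprocess.run(["zfs", "recv", "-F", DST], stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

    # A bookmark lets the old source snapshot be destroyed while still serving
    # as the base for the next incremental send.
    subprocess.run(["zfs", "bookmark", SRC + "@" + CUR, SRC + "#" + CUR], check=True)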
As I mentioned in my other[1] post, the SLOG isn't really a write-back cache. My workloads are mostly async, so a SLOG wouldn't help unless I force sync=always, which isn't great either.
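For reference, that workaround is just a dataset property (dataset name below is made up):

    # Force every write on this dataset through the ZIL (and thus the SLOG),
    # trading latency and SLOG wear for it. Dataset name is a placeholder.
    import subprocess
    subprocess.run(["zfs", "set", "sync=always", "tank/data"], check=True)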
I love ZFS overall; it's been rock solid for me in the almost 15 years I've used it. This is just the one area where I feel it could do with some improvement.
You may be interested in checking out bcache[1] or bcachefs[2].
[1] https://www.kernel.org/doc/html/latest/admin-guide/bcache.ht...
[2] https://bcachefs.org/
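For write bursts specifically, bcache's writeback mode is the relevant part; a rough sketch of the setup (device paths and the bcache0 name are assumptions):

    # Rough sketch of a writeback bcache setup (needs bcache-tools, run as root).
    # Device paths and the resulting bcache0 name are placeholders.
    import pathlib
    import subprocess

    BACKING = "/dev/sda"      # slow backing device (placeholder)
    CACHE = "/dev/nvme0n1"    # fast SSD (placeholder)

    # Format backing and cache device together; they get attached automatically.
    subprocess.run(["make-bcache", "-B", BACKING, "-C", CACHE], check=True)

    # bcache defaults to writethrough; writeback is what absorbs write bursts.
    pathlib.Path("/sys/block/bcache0/bcache/cache_mode").write_text("writeback\n")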