Hacker News

A way to have your cake and eat it too: check out Primocache. It's pretty inexpensive disk caching software (especially for Windows Server, which is where I really leverage it!).

Pair it with an Optane for L2 cache and it will speed up normal SSD use too ;)




How does it compare to default linux caching algorithm?


The default one that caches disk data in memory? Primocache solves a very different problem: it caches data from slower disks onto faster disks. It can accelerate reads, and it can also act as a writeback cache.


It's most comparable to lvmcache or bcache. As in, Primocache does three actually useful things:

  - It can pre-populate an in-memory disk cache. Not massively useful, but depending on your workload and uptime it might save some time. Nothing I know of does this on Linux.
  - It can act as a level 2 block cache, i.e. caching frequently accessed blocks from slow (HDD) storage on fast (SSD) storage. This is massively useful, especially for e.g. a Steam library.
  - It can force extensive write buffering, using either main memory or SSD.
And yes, it can also act as a block buffer in main memory, but I don't find that helpful. Windows' default file cache does a good enough job, and Linux's VFS cache works even better. (Though ZFS's ARC works better yet...)

Its write buffering increases write speeds massively, inasmuch as it delays actually writing out data. Obviously, doing so with main memory means a crash will lose data; what's not so obvious (but amply explained in the UI) is that, because it's a block-based write buffer, it can also corrupt your filesystem. Primocache does not obey write barriers in this mode.

What's even less obvious, and where it differs from lvmcache / bcache, is that this can still happen even if you use only an L2 (SSD) write buffer, not a memory-based one. The data will still be there on the SSD, but Primocache apparently doesn't restore the write-buffer state on bootup. Possibly it's not even journalled correctly; I don't know, I just got the impression from forums that fixing this would be difficult.

So, overall, bcache / lvmcache / ZFS* are all massively superior. Primocache is your only option on Windows, however, unless you'd like to set up iSCSI or something. I've considered that, but I'd need at least a 10G link to get worthwhile performance.
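For reference, attaching an SSD cache to a slow LV with lvmcache is only a few commands. A minimal sketch (the VG name "vg0", LV name "data", cache size, and device paths are all assumptions for illustration):

```shell
# Assumed layout: VG "vg0" holds the slow LV "data" on an HDD;
# /dev/nvme0n1p1 is the SSD partition to use as cache.
pvcreate /dev/nvme0n1p1
vgextend vg0 /dev/nvme0n1p1

# Carve a cache volume on the SSD...
lvcreate -L 100G -n cache0 vg0 /dev/nvme0n1p1

# ...and attach it to the slow LV. writeback buffers writes on the SSD;
# use --cachemode writethrough if you only want read caching.
lvconvert --type cache --cachevol vg0/cache0 --cachemode writeback vg0/data
```

Unlike Primocache's L2 write buffer, lvmcache's writeback state survives a reboot, since the cache metadata lives on the SSD itself.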

*: ZFS supports cache-on-SSD, and unlike a year ago that's persisted across reboots, but it doesn't support write-buffer-on-SSD. Except in the form of the ZIL (on a dedicated SLOG device), which is of dubious usefulness; that buffer is only used for synchronous writes.
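Concretely, both device classes are added with `zpool add`; a sketch, assuming a pool named "tank" and these NVMe partitions:

```shell
# L2ARC read cache on one SSD partition (assumed device paths)
zpool add tank cache /dev/nvme0n1p1

# SLOG device: holds the ZIL, i.e. only buffers *synchronous* writes
zpool add tank log /dev/nvme1n1p1

# Persistent L2ARC across reboots (OpenZFS >= 2.0; on by default there)
echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled
```

Async writes never touch the SLOG; they sit in main memory until the next transaction group commits, which is why it's no substitute for a general write buffer.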

However, ZFS is a transaction-oriented filesystem. It never reorders writes between transaction groups, which means that even if you run it with sync=disabled -- which disables fsync & related calls -- it's impossible to get locally visible corruption. The state of your filesystem at bootup will be *some* valid POSIX state, as if the machine had been suddenly but cleanly shut down. You still need to tweak the filesystem parameters; by default transaction groups end after 5 seconds, which is unlikely to be optimal.
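Both knobs are a one-liner each; a sketch, assuming a dataset "tank/scratch" you're willing to lose a few seconds of writes on:

```shell
# Trade durability for speed: fsync & friends become no-ops on this dataset
zfs set sync=disabled tank/scratch

# Lengthen transaction groups from the 5-second default
# (Linux OpenZFS module parameter; value is in seconds)
echo 15 > /sys/module/zfs/parameters/zfs_txg_timeout
```

After a crash you roll back to the start of the last uncommitted transaction group, so the window of lost writes is bounded by zfs_txg_timeout (plus dirty-data limits).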

Alternately, you can run it on top of lvmcache or bcache.
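A sketch of the bcache variant of that stacking (device paths and the cache-set UUID placeholder are assumptions; the UUID comes from make-bcache's output or bcache-super-show):

```shell
# Assumed: /dev/sda is the HDD backing device, /dev/nvme0n1p1 the SSD cache
make-bcache -B /dev/sda
make-bcache -C /dev/nvme0n1p1

# Attach the cache set to the backing device and enable write-back mode
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode

# Then build the pool on top of the cached device
zpool create tank /dev/bcache0
```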



