
The last time I looked at using ZFS for my local NAS, the documentation stated that it required at least 1GB of RAM for every 1TB of storage. Are they disabling certain features to get around this requirement? I'd love to use ZFS locally but the RAM requirement is a big problem.



You're probably thinking of dedup, a ZFS feature that is off by default.

Dedup is only useful in certain specialised situations, so leave it off unless you have evidence it'll result in substantial space savings (and you're willing to deal with the memory footprint).

In my experience, ZFS's RAM usage is reasonable, regardless of disk size, as long as you stay away from dedup. Most of ZFS's memory usage is the ARC (its disk cache), and using spare RAM as a disk cache is something any OS more advanced than DOS does.
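If you want to double check, the dedup property is easy to inspect, and zdb can even simulate what dedup would buy you before you ever turn it on. A rough sketch, with "tank" standing in for your pool name:

    # Confirm dedup is off (it is by default)
    zfs get dedup tank

    # Simulate dedup against existing data; the summary line prints an
    # estimated dedup ratio - if it's close to 1.00, don't bother
    zdb -S tank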


To put some numbers on that, FreeBSD recommends 1 GB as a minimum, and you can tune it to run stably with less. Dedup is insanely greedy - the FreeBSD Handbook reckons on 5 GB per TB in practice, rather than the 2 GB often stated. 99 times out of 100 you're better off using compression anyway.
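Roughly, that tuning looks like this on FreeBSD; the pool name "tank" and the 512M cap are placeholders for a small box:

    # Cheap, fast compression instead of dedup
    zfs set compression=lz4 tank

    # Cap the ARC on a RAM-starved machine: add this line to
    # /boot/loader.conf and reboot
    vfs.zfs.arc_max="512M"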

All you're really losing is the caching, which may not matter much for a workload that will be mostly writes, as in this backup app.

If you want to add some cache headroom to a Pi, you could add a USB stick to the pool as cache. I just threw RAM at my homebuilt NAS, so I can't say how helpful this is. I imagine it's useless for sequential reads and pretty good for small random reads, directory listings, etc.
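Adding one is a one-liner - something like this, assuming the stick shows up as da0 and the pool is called "tank":

    # Attach a USB stick as an L2ARC (read cache) device
    zpool add tank cache /dev/da0

Since it's only a cache, you can pull it back out later with zpool remove if it turns out not to help.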


If the L2ARC device dies, do I lose my pool? I've got an SSD in my NAS but I'd be using a bog standard flash drive on a Pi.


The L2ARC is just a read cache that works alongside the regular ARC stored in RAM. If the L2ARC device malfunctions then you might see a performance hit, but your pool will remain intact. All L2ARC reads are checked for integrity, so invalid data will be rejected and read from the pool directly instead.

ZFS can even tolerate the loss of a SLOG device, as long as A) the pool isn't in the middle of replaying the SLOG to recover after a crash and B) all writes that hit the SLOG before it was lost have been fully flushed to the pool disks.
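If a cache or log device does die, recovery is undramatic - roughly this, with example pool and device names:

    # See which device is faulted
    zpool status -v tank

    # Detach the dead cache or log device from the pool
    zpool remove tank da0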


I've seen L2ARC (and SLOGs too) die before, and ZFS kept on working.

Just expect a hit on performance if your workload leans heavily on the L2ARC. Also, in my experience, L2ARC is a lot less useful than people tend to think; doing some workload characterisation is an interesting and informative exercise.
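By characterisation I mostly mean watching the counters for a while - something like this on FreeBSD, with "tank" standing in for your pool name:

    # Per-device traffic every 5 seconds, including the cache SSD
    zpool iostat -v tank 5

    # L2ARC hit/miss counters
    sysctl kstat.zfs.misc.arcstats.l2_hits
    sysctl kstat.zfs.misc.arcstats.l2_misses

If the hit counter barely moves relative to misses, the L2ARC isn't doing much for you.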

As an aside: a SLOG is usually worth having. Unless your workload is effectively read-only, a SLOG can make synchronous writes to a spinning-disk pool perform close to SSD levels on the cheap. Just be careful to get a quality SSD (ideally one with power-loss protection) for the SLOG.
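Adding one looks something like this (device names are examples; mirror the log if you're worried about losing in-flight sync writes):

    # Single SSD as a separate intent log (SLOG)
    zpool add tank log /dev/ada1

    # Or a mirrored pair, so a single SSD failure doesn't cost you the log
    zpool add tank log mirror /dev/ada1 /dev/ada2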


Bah, every time I think I've got a handle on ZFS I learn something new that tells me I haven't configured my system properly. I've got 2 mirrored vdevs and a 120 GB SSD as L2ARC - I'm not sure I'd get much more of a benefit from a SLOG, although it might speed up some jails. CIFS transfers are running at 800-900 megabits, and moving to 10GbE isn't feasible yet ;).

OT: Anyone seen behaviour like this? https://lists.freebsd.org/pipermail/freebsd-net/2016-Februar...


Like the other reply says, it's not necessary unless you use deduplication. My box has 60 TB of storage with 16 GB of RAM and no issues; I reckon it would be fine with half that.



