
Yeah agreed, very closely related - even more so on ZFS where the compression (AFAIK) is on a block level rather than a file level.



ZFS compression is for sure at the block level - it's fully transparent to the userland tools.
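
One way to see that transparency from userland, as a minimal sketch (the dataset and path are hypothetical; assume compression was enabled with something like `zfs set compression=lz4 tank/data`):

    import os

    # Minimal sketch: on a ZFS dataset with compression enabled, userland
    # still sees the full logical size, while st_blocks reflects the much
    # smaller on-disk allocation. Path below is hypothetical.
    path = "/tank/data/zeros.bin"

    with open(path, "wb") as f:
        f.write(b"\x00" * (16 * 1024 * 1024))  # 16 MiB of very compressible data
        f.flush()
        os.fsync(f.fileno())

    st = os.stat(path)
    print("logical size:", st.st_size)          # 16 MiB, same as on any FS
    print("on-disk size:", st.st_blocks * 512)  # far smaller after compression
    # (ZFS updates space accounting asynchronously, so the on-disk figure
    # may take a moment to settle.)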


It could be at the file level and still be transparent to userland tools, FYI. Depending on what you mean by ‘file level’, I guess.


Windows NTFS has transparent file-level compression that works quite well.


I don't know how much I agree with that.

The old kind of NTFS compression, from 1993, is completely transparent, but it uses a weak algorithm and processes each 64KB chunk of a file independently. It also fragments files to hell and back.
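
A rough way to see why the independent 64KB units cost ratio (a sketch using zlib as a stand-in for NTFS's actual LZNT1 algorithm; the file name is hypothetical):

    import zlib

    # Hedged illustration: zlib stands in for LZNT1. The point is only that
    # compressing each 64KB unit independently, as the 1993-era scheme does,
    # discards any redundancy that spans unit boundaries.
    data = open("example.log", "rb").read()  # hypothetical compressible file

    CHUNK = 64 * 1024
    per_chunk = sum(len(zlib.compress(data[i:i + CHUNK]))
                    for i in range(0, len(data), CHUNK))
    one_stream = len(zlib.compress(data))  # a single stream with full context

    print("per-64KB chunks:", per_chunk)
    print("single stream:  ", one_stream)  # typically noticeably smaller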

The new kind, from Windows 10, has a better algorithm and can use up to 2MB of context, which is quite reasonable. But it's only transparent to reads, not writes: you have to apply it manually, and if anything writes to the file, it decompresses.
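
Applying it looks something like this (a sketch wrapping the stock compact.exe tool; the path is hypothetical):

    import subprocess

    # Sketch: the newer (Windows 10) compression is applied manually with
    # compact.exe; LZX gives the best ratio of its algorithms. The path is
    # hypothetical. A later write to the file silently decompresses it again.
    path = r"C:\logs\big.log"
    subprocess.run(["compact", "/c", "/exe:lzx", path], check=True)
    subprocess.run(["compact", path], check=True)  # report compression state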

I've gotten okay use out of both in certain directories, with the latter being better despite the annoyances, but I think they both have a lot of missed potential compared to how ZFS and BTRFS handle compression.


I'm talking about the "Compress contents to save disk space" option in the Advanced Attributes. It makes the file blue. I enable it on all .txt log files because it is so effective and completely transparent. It compresses a 25MB Google Drive log file to 8MB.
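
The ratio is easy to check programmatically, too; a sketch using ctypes and Win32's GetCompressedFileSizeW (the path is hypothetical):

    import ctypes
    import ctypes.wintypes
    import os

    # Sketch: comparing logical size with on-disk size for an NTFS-compressed
    # file via Win32's GetCompressedFileSizeW. The path is hypothetical.
    path = r"C:\logs\drive_sync.log"

    kernel32 = ctypes.windll.kernel32
    kernel32.GetCompressedFileSizeW.restype = ctypes.wintypes.DWORD

    high = ctypes.wintypes.DWORD(0)
    low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
    on_disk = (high.value << 32) | low
    print("logical:", os.path.getsize(path), "bytes")
    print("on disk:", on_disk, "bytes")  # e.g. roughly 8MB for a 25MB log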


That's the old kind.

It's useful, but if they updated it, it could get significantly better ratios with less impact on performance.



