
SSDs.

Virtualization.

Git. I've used CVS and SourceSafe before.

LVM. The old MBR way of partitioning is just awful.

systemd and journald, just to be a bit controversial. I really don't miss my days of screwing around with init scripts, or having to parse logs by hand.

Arduino. It makes lots of cool stuff very accessible.

sshfs, it's amazingly convenient.

pulseaudio. Sound on Linux finally works. I spent an unbelievable amount of time fighting with it before.

valgrind. Amazing for debugging memory issues.

Modern hardware. It's only recently that I'm no longer tightly restricted by RAM, disk space or the CPU. I remember the hours spent freeing up conventional memory, a kernel taking 3 hours to compile, and only having room for half the stuff I wanted to install.




SSDs did it twice - the improvement from spinning disk to SSD in desktops was amazing, and then NVMe/M.2 took it ANOTHER 10x - the speeds for storage are now so insanely fast that some of the trade-off decisions no longer apply.


+1 for git. I learned it at my second job, and I remember at my previous position using Windows shared network drives as a very basic version control system.


I absolutely love systemd and journald. The fact that you can do most things declaratively and have a common interface for logs is just beautiful. I can do things like "show me the logs for this service for the last 5 mins".
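
For example (myapp.service is just a placeholder unit name):

    # last 5 minutes of logs for one service
    journalctl -u myapp.service --since "5 minutes ago"

    # or follow them live
    journalctl -u myapp.service -f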

I still don't fully understand why people prefer bash scripts.

But then again, I have never once needed to edit those bash scripts since I usually don't stray too much from defaults.


It all depends of course on what you do. When you try to ship stuff for various Linux distros, having to figure out how SuSE, Red Hat and Ubuntu like doing their init scripts is a pain.

Admin-wise, it's nice to never have to look at a 3-way merge of stuff in /etc/init.d because somebody touched something and then the system was upgraded.

But besides that there's lots of cool benefits. Like not needing to deal with PID files, the fact that you always know what a random process belongs to, and that if you want to stop something, nothing will be left behind. Plus all the nice resource limits, auto-restart and other useful parameters that can be applied to anything.
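
A rough sketch of what a unit can look like (the name, path and limits here are made up, not anyone's real config):

    [Unit]
    Description=Example service

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure
    MemoryMax=512M
    CPUQuota=50%

    [Install]
    WantedBy=multi-user.target

Stop the unit and every process it spawned goes with it, no PID files involved.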


It may make the lives of distro developers harder, but doesn't it make the lives of sysadmins (of whom there are probably 100,000x more than distro developers) easier?

And we always wax poetic about how doing things in a declarative way ultimately leads to a better experience. So I am always perplexed when people give systemd a hard time, when that's exactly what systemd has done: it's established a standard and consistent way to declare services.

Speaking of other things that seem to not want to change: why, in 2021, a decade after PowerShell, do we still pipe unstructured text between processes and perform gymnastics with sed/awk/print? Does that not feel extremely janky to people? I'm not remotely a fan of Windows servers, but PowerShell got this right from day one, since it started from scratch with the idea of piping objects back and forth.

Can we not as a community at least have standardized flags to make the various standard commands output, say, JSON? And take JSON as input as well? Maybe this has already been tried, but I don't see it.
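
A few tools have at least started sprouting JSON output flags, which hints at what this could look like (jq does the heavy lifting here, and the fields queried are just examples):

    # lsblk and ip can already emit JSON
    lsblk -J | jq -r '.blockdevices[].name'
    ip -j addr | jq -r '.[].ifname'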


> LVM. The old MBR way of partitioning is just awful.

Just wait until you find out about ZFS!


Honest question: can you explain the practical advantages of ZFS over LVM + Ext4? I understand that ZFS combines a filesystem and volume manager, but why should I, a humble OSS user, care?


Not all of ZFS's advantages necessarily apply to "humble" uses like your home PC, but I can think of a few that surely do.

Filesystems share all available space in the pool. Say you have 16GB of space available, so with LVM you might make an 8GB /home and an 8GB /var. Later on, you've run out of space in /home while still having 4GB free in /var. What do? Well, you can probably get by with resize2fs followed by lvresize ... but this is not only cumbersome, but prone to catastrophic failures. With ZFS this is simply never an issue.
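
Roughly what that dance looks like (vg0, tank and the sizes are made-up names/numbers, and the LVM steps are from memory, so double-check before running anything like this):

    # LVM + ext4: steal 4GB from /var, give it to /home
    umount /var                      # ext4 can only be shrunk offline
    e2fsck -f /dev/vg0/var
    resize2fs /dev/vg0/var 4G
    lvreduce -L 4G /dev/vg0/var      # get this size wrong and the filesystem is toast
    mount /var
    lvextend -L +4G /dev/vg0/home    # growing, at least, works online
    resize2fs /dev/vg0/home

    # ZFS: datasets just share the pool's free space; a quota is optional and instant
    zfs set quota=12G tank/home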

Snapshots. LVM has them too, of course, but they're not aware of files so they're far less useful. With ZFS, you can simply run `zfs diff` and it will pretty quickly tell you which files were added, modified, and deleted.
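
For example (tank/home being a placeholder dataset name):

    zfs snapshot tank/home@before-upgrade
    # ...some time later...
    zfs diff tank/home@before-upgrade tank/home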

Building on this snapshot functionality, incremental backups become a breeze with `zfs send` and `zfs receive`, obviating the need for clunky (by comparison, at least) solutions like rsnapshot.
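
Something along these lines, with made-up dataset and host names:

    # one full send, then incrementals between snapshots
    zfs send tank/home@monday | ssh backuphost zfs receive backup/home
    zfs send -i tank/home@monday tank/home@tuesday | ssh backuphost zfs receive backup/home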

ZFS actively detects data corruption by checksumming everything. Assuming you have redundancy (mirrors or raidz pools), it also automatically repairs it. It's often said that this is only relevant to "enterprises," but I'd disagree: I don't want to discover my personal files have been corrupted any more than an enterprise does their business files. A better argument is that this protection is incomplete if one is not using ECC RAM; however, a partial counter to that is that a lot of recent consumer hardware (namely, AMD's) does support ECC RAM.
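
You can also ask it to verify everything on demand (tank being the pool name):

    zpool scrub tank
    zpool status tank    # shows scrub progress and any checksum errors found or repaired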

Overall I'd say comparing ZFS to LVM+ext4/XFS/whatever is kind of like comparing, say, Windows 95 to Windows 2000 (or Unix, if you prefer). It's just not a fair comparison; they're not even playing the same sport. A much more apt comparison would be to BTRFS, but ZFS has been rock-solid for longer than BTRFS has been in existence, and the latter has been eating people's data left and right for most of that time.

On the other hand, one downside to using ZFS at home is that you can't really add one drive at a time. Say you have a raidz2 (similar to RAID-6) pool with 6 drives in it. You can't just add a seventh. The ideal way to grow such a pool in ZFS would be to add a whole nother raidz2 "vdev", which would then be striped with the first one. That means buying 6 new drives, not 1.
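
In other words, growing means something like this (disk names are placeholders), not adding one drive:

    # stripe a second 6-disk raidz2 vdev onto the existing pool
    zpool add tank raidz2 sdg sdh sdi sdj sdk sdl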


Thanks so much for this detailed explanation, it really helped underscore the differences! Do you run ZoL or use FreeBSD?


I use both. On Debian stable, ZFS is a piece of cake, because the kernel version never changes. You just install the ZFS package (which is actively maintained) and call it a day. I imagine on something like Arch it could become quite a pain, though (never tried it myself).


A few more questions if you'll indulge me. First, how do you actually install ZFS on Debian? I understand you can install the ZFS package, but that in and of itself won't change the underlying filesystem on disk. Second, doesn't Ext4 also support checksumming? (See https://ext4.wiki.kernel.org/index.php/Ext4_Metadata_Checksu...).


Well, yes. You can't just convert filesystems in place, whether ZFS or otherwise. After installing the package you can create ZFS pools and filesystems. If you want to replace existing filesystems you would move all your stuff over after the fact.
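
On Debian that's roughly (the pool layout here is just an example; the ZFS packages live in contrib):

    apt install linux-headers-amd64 zfs-dkms zfsutils-linux
    zpool create tank mirror /dev/sdb /dev/sdc
    zfs create tank/home          # mounts at /tank/home by default
    rsync -a /home/ /tank/home/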

Booting from ZFS will require some extra steps. I actually cheated here and just left /boot and / on ext4, using ZFS for /usr, /home, /var and everything else, so I don't have firsthand experience with this (on Linux anyway, on FreeBSD you can just select ZFS during installation). But if you do want that, it doesn't seem too difficult, https://openzfs.github.io/openzfs-docs/Getting%20Started/Deb...

As far as ext4 checksumming, as your own link says, it only checksums metadata -- actually, a subset of the metadata apparently. ZFS checksums everything, meaning your actual data. Further, since ext4 is not aware of redundancy -- you'd be using lvm or mdraid for that -- it's not clear what it can do if it does detect an error. With ZFS these errors are fixed automatically.



