https://github.com/Lillecarl/nixos/blob/master/shitbox/disko... this is my declarative partitioning scheme: I use mdraid, LUKS, LVM and btrfs. I also mirror my bootloader, so if one drive dies I can still boot :)
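
For anyone who hasn't used disko, here's a rough, much-simplified sketch of what a declarative layout in that spirit looks like (single disk, LUKS + btrfs only; the linked config also layers mdraid and LVM and mirrors /boot across drives). The device name, sizes and subvolume names are placeholders, and the attribute names are from memory of the disko examples, so treat it as a sketch rather than a copy-paste config:

  {
    disko.devices.disk.main = {
      type = "disk";
      device = "/dev/nvme0n1";        # placeholder device
      content = {
        type = "gpt";
        partitions = {
          ESP = {
            size = "512M";
            type = "EF00";            # EFI system partition
            content = {
              type = "filesystem";
              format = "vfat";
              mountpoint = "/boot";
            };
          };
          root = {
            size = "100%";
            content = {
              type = "luks";          # everything except /boot is encrypted
              name = "cryptroot";
              content = {
                type = "btrfs";
                subvolumes = {
                  "/rootfs" = { mountpoint = "/"; };
                  "/home" = { mountpoint = "/home"; };
                };
              };
            };
          };
        };
      };
    };
  }

The disko NixOS module can then create the partitions and wire up the matching fileSystems entries, which is what makes the whole layout reproducible on a replacement drive.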

Hardware raid is legacy :)




That only holds until the machine in question is 5000 km away and the soonest you can get to it is in three months.

Sure, for personal use there is almost no need for HW RAID, but when you need to make sure the system will always boot and it can't be serviced within hours or days, then with SW RAID you have almost no options.


Incorrect.

No problem to put /boot on a raid1 on a small partition across all drives, so that any drive can boot, and no problem to even include a whole self-contained, remotely accessible recovery OS. It's a little more work to set up, but if you are professing a need for that, then a little extra setup is de rigueur. I remotely administered a ton of Linux boxes in racks scattered across the US like that for years. Although I had out-of-band serial console access and could do a full bare-metal reinstall that way, I could also do it from any neighboring machine still running in the same rack if I had to, with a combination of network booting and/or booting from any one of the normal drives' raid1 copies of the /boot partition.
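
To make that concrete, here is a minimal sketch of the "/boot on raid1, bootloader on every drive" idea, written as NixOS options to match the disko config upthread rather than the plain-distro setup described here (there it was mdadm plus running grub-install against each member disk); the array and device names are placeholders:

  {
    # /boot lives on a small mdadm RAID1 spanning all drives, so every
    # disk carries an identical copy of the boot files.
    fileSystems."/boot" = {
      device = "/dev/md0";    # placeholder array device
      fsType = "ext4";
    };

    # Install GRUB onto every member drive, not just the first one,
    # so the machine boots no matter which disk the firmware picks.
    boot.loader.grub = {
      enable = true;
      devices = [ "/dev/sda" "/dev/sdb" "/dev/sdc" "/dev/sdd" ];
    };
  }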

Further remote-able fallback options that I never even had to use but could have: local hands just plucks a hot-swap drive from any of my other machines and pops it into the bad machine. All drives had the same bootable partition and all drives were redundant, so they could yank literally any one from the wall of server fronts. Or, better, local hands just plugs in a thumb drive and I take care of the rest. A thumb drive was already sitting there for that purpose, or they could make a new one from a download. But with 8 to 24 hot-swap drives per machine, meaning 8-24 copies of /boot, I never once needed local hands to so much as plug in a thumb drive.

There is just no problem at all with SW RAID. It only adds options; it doesn't remove any.


> Incorrect

Did you even read my comment? It's quite clear what your environment was: data centers, with spares and remote hands.

Mine wasn't, and when I say three months I don't kid or jest.

> No problem to put /boot on a raid1 on a small partition across all drives

This is exactly the problem. If the drive isn't totally dead (dead as in not even responding to IDENT), then there is a chance that the BIOS/UEFI will try to boot from it and even succeed in that (i.e. load the MBR/boot code), and at that point there is no way to alter the boot process. A HW RAID card provides a single boot point and handles failing drives by itself, never exposing those shenanigans to the upper level.

Like sure, you are happy with your setup, you never had a bad experience with it, you always had OOB management and remote hands - but that doesn't mean it's a silver bullet that works 100% of the time for everyone.

Yes, I've seen, with my own eyes, systems with SW/fake RAID fail to boot because the boot process picked a half-dead drive as the boot device. Thankfully I was geographically close to them, not 5000 km away.

Yes, I serviced and prepared systems for divisions 5000 km away, and they really are serviced only a couple of months a year; the rest of the time you need an extremely urgent reason to rent a heavy 'copter to go there. No, there are no remote hands there. The peak of IT competency there is racking up bills on satellite Internet.


The house could also burn down. The point was that there is nothing hardware RAID makes uniquely possible, or even merely better, or even merely equal.


Never used disko - are there any gotchas? Will it format my drives if I run nixos-rebuild?



