My home lab sounds pretty similar to the author's - three compute nodes running Debian, and a single storage node (single point of failure, yes!) running TrueNAS Core.

I was initially pretty apprehensive about running Kubernetes on the compute nodes, my workloads being special snowflakes and all. For instance, I looked at off-the-shelf apps like Mailu for hosting my mail system, but I have some really bizarre Postfix rules that it wouldn't support. So I was worried that I'd have to maintain Dockerfiles, and a registry, and lots of config files in Git, and all that.

And guess what? I do maintain Dockerfiles, and a registry, and lots of config files in Git, but the world didn't end. Once I got over the "this is different" hump, I actually found that the ability to pull an entire node out of service (or have fan failures do it for me) more than makes up for the extra overhead. I no longer have awkward downtime when I need to reboot (or have to worry that the machines will reboot on their own), or little bits of storage spread across lots of machines.
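
For the curious, "pulling a node out of service" is just a cordon-and-drain. A rough sketch of what I mean (the node name is made up, and the flags assume a typical homelab cluster where losing DaemonSet pods and emptyDir scratch data is fine):

    # Stop new pods from being scheduled onto the node
    kubectl cordon node2
    # Evict the running pods so they reschedule on the other nodes
    kubectl drain node2 --ignore-daemonsets --delete-emptydir-data
    # ...reboot, replace the fan, whatever...
    # Let the node take pods again
    kubectl uncordon node2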

I fully agree. If you come from "old-school" administration, Docker and Kubernetes seem like massive black boxes that replace all your familiar configuration knobs with fancy cloud terms. But once you get to know them, it just makes sense: backups get a lot simpler, restoring state is easy, and keeping things isolated from each other takes far less effort.
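
To make the backup point concrete: once state lives in named volumes, a backup is just an archive made through a throwaway container, and a restore is the same trick in reverse. A rough sketch (the volume name is made up):

    # Archive the contents of a named volume to the current directory
    docker run --rm -v mail-data:/data -v "$PWD":/backup alpine \
        tar czf /backup/mail-data.tar.gz -C /data .
    # Restore it into a (possibly fresh) volume
    docker run --rm -v mail-data:/data -v "$PWD":/backup alpine \
        tar xzf /backup/mail-data.tar.gz -C /data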

That said, I can only encourage the author to go ahead with this plan. All those abstractions are great, but at least for me it was massively valuable to know what you're replacing and what an old-school setup is actually capable of.
