Domesticating Kubernetes (blog.quickbird.uk)
123 points by ClumsyPilot on May 1, 2020 | 95 comments



Why would you do this for simple home apps? k8s is complete overkill. At home, there is little reason to drag in all that complexity to host a blog or whatever.

Kubernetes drags in a tremendous amount of complexity, background knowledge, and strange constraints, even on small installations. I really do not see its benefit for small apps and projects.

Are you hosting your blog? You don't need k8s. Are you running a small app server at home? Nope, don't need it. Are you running an auto-scaling app with many hundreds or thousands of worker instances, needing non-disruptive rolling upgrades across multiple clouds, with many services that need to scale independently? Maybe you need k8s, but only in some circumstances.

For the record, I run many k8s clusters across several clouds, using 1,000 - 50,000 cores at any one time, so I've dealt with quite a bit of k8s complexity, and I'm still on the fence about whether it's the right answer for us. It has, however, allowed us to standardize our software on k8s and worry only about getting that running well in each deployment, which puts the cross-cloud work on the infrastructure teams rather than the software development teams. The price you pay is that you still need non-k8s tooling whenever you want cloud-specific resources beyond simple compute and routing, so you end up with both k8s and cloud-specific code.


I'll bite.

One thing I found in running my home lab was that I kept having to burn it to the ground & rebuild it regularly anyway. For example, dist-upgrade never really works: every time I try I just end up wasting a couple days wrestling with it and then giving up and rebuilding the machine from scratch. Even if I just assumed that I'd have to build from scratch every time, differing app and library versions meant that I couldn't count on a new build being a simple, clean install.

So going with the regular "run things as a daemon on a server" model wasn't actually saving me that much time.

Basically I could do one of two things:

1. keep using purpose-configured machines and spend a bunch of time writing ansible scripts to automatically re-create them when everything goes pear-shaped and then re-write the scripts when a new version changes stuff.

2. container-ize all my tasks, and make everything else vanilla and effectively disposable. I have to spend some initial time to rewrite my stuff in container-speak, but that's a one-time cost.

Presented like that, option 2 looked like a better option. When a machine has a problem or needs to be rebuilt, I build it with the completely vanilla setup (ubuntu lts, conjure-up k8s) and push my k8s jobs and pods up to it. That's 2 hours instead of a day and a half. (Yes, in theory I could docker-ize everything and run docker-swarm, but it's a small step from there to k8s, and conjure-up makes installing k8s fairly straightforward.)

Frankly, I have a family now, and fucking with fiddly settings and library dependencies isn't fun anymore. I'd rather spend that time doing stuff. k8s lets me divorce my hobby work from the infrastructure, and I like that.


I think you could achieve something similar by just using simple standalone containers that you start with Podman or Docker. Once you have the OS up and running, all you need to do is push the configuration to the server (I just rsync systemd service files) and start the services. Persistent data is stored elsewhere anyway, so moving a service is just a matter of copying a directory over to another machine and ensuring that the permissions are correct.

k8s is probably great for many things, but as time goes on I really appreciate the ability to debug things effectively, and by keeping my homelab setup as simple as possible I avoid the complexity of whatever advanced solution is out there.


I could, but with the same effort to set that up, I could have a full k8s setup...it doesn't actually save me any time to do it the "simple" way.


I'll second you. I admin our K8s infrastructure and clusters full time. It's a great solution for our production workloads, and I've bootstrapped RancherOS/k3s a handful of times to look at using it at home, but K8s is purpose-built for the cloud. I can get all the same benefits with significantly reduced complexity with Docker + systemd at home.


what are some use cases where you need systemd alongside docker containers?


systemd can restart your container if it crashes. It can also start it if you reboot your system.
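A minimal sketch of what that looks like as a unit file (service and image names are made up; the container runs in the foreground so systemd can supervise it):

    # /etc/systemd/system/myblog.service
    [Unit]
    Description=My blog container
    After=docker.service
    Requires=docker.service

    [Service]
    Restart=always
    # clean up any stale container from a previous run; the "-" ignores failures
    ExecStartPre=-/usr/bin/docker rm -f myblog
    ExecStart=/usr/bin/docker run --name myblog -p 8080:80 ghcr.io/example/myblog:latest
    ExecStop=/usr/bin/docker stop myblog

    [Install]
    WantedBy=multi-user.target

Then systemctl enable --now myblog gets you both restart-on-crash and start-on-boot.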


can you set a restart policy with just Docker?


I don't know. But if you can, that still leaves you the second use case: starting your container on reboot.

Can you have Docker do that too?


Learning?


That is a fine reason!


I've given up running my own k8s at home a couple of times. It does seem to be getting easier, but then upgrading and maintaining breaks me again. Plus the other stuff, like running your own container registry. Then with the new CentOS, podman breaks everything that worked OK in docker. I hate this stuff; so many problems it's worth ignoring the whole stack.


I do not blame you. I try to avoid it as long as possible. The amount of time, energy and effort wasted on this is just insane.


Hosted k8s has changed my life. Let me focus on my business and not my infra.


Have you had any serious yet? If you are on the happy path does not really matter what you use.


Any serious what? I am running a mildly successful SaaS business on hosted kubernetes, yes.


Sorry, HN made me retry a couple of times. Any serious troubles with Kubernetes. Have you had any of those?


No I have not. Going on 3 years now?


Great. So you do not have any experience with the worst-case scenario. That is exactly my point. I would like to hear your opinion once you've fixed an outage caused by k8s.


Well what is this outage caused by?

Not understanding how k8s works, or the actual provider screwing up?

if the former - I try to read carefully and test my workloads in a test cluster before I do it in prod.

if the latter, I think google's operational track record is excellent.

This is like saying you wouldn't take driving advice from me until I total a few cars. Perhaps I didn't total a few cars because I make good driving decisions?


An outage that is caused by k8s. You can pick a random from this list. https://github.com/hjacobs/kubernetes-failure-stories

>>This is like saying you wouldn't take driving advice from me until I total a few cars

No, I am saying you do not have enough experience with k8s. That is all. Totaling a car implies you are driving; using k8s implies that whoever wrote that particular part of k8s is driving. You have some control, but ultimately you need to understand what is going on. As that list shows, many companies who put k8s into production learned it the hard way.


Sorry I’m not following

Should I compile a list of failure stories for companies that ran Linux because Linux is complicated or has had various bugs?

How about a list of failure stories of how Ruby causes problems?

You can get errors with any system. You would need to prove to me that k8s has, say, more errors per day than a competitor (lambda? VMs). I just don't understand...


>> Hosted k8s has changed my life. Let me focus on my business and not my infra.

This is the claim that you made. I would argue that since you have no experience with k8s failures, you perceive it as not needing to focus on your infra. Once you run into a failure with k8s and actually need to fix it, this might change. That is all.


Why would it?

I have run into Linux failures. I still use Linux.

I have run into bugs in Java. I still use Java when it makes sense.

So not sure what you are trying to get to. You give up on a technology the first time you have a problem?


expensive though?


Not really, total cloud costs are under 5% of my revenue.

If I want to shave pennies, I have many places to shave before the very thing that is the heart of my business.


what do you end up using instead of k8s?


Plain old systemd unit files work great. If you really need containers for some reason, and love UID and GID mapping, then podman in systemd is great.
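A sketch of that workflow with a made-up service (podman generate systemd has since been superseded by Quadlet on newer Podman versions, but the idea is the same):

    podman create --name gitea -p 3000:3000 docker.io/gitea/gitea:latest
    podman generate systemd --new --name gitea > /etc/systemd/system/container-gitea.service
    systemctl enable --now container-gitea

The generated unit recreates the container on each start and handles restart-on-failure and start-on-boot.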

The only reason I’ve found to run k8s at home is if your full time job is operating k8s at work.


I mostly use docker compose. Perfect fit for tinkering, but not overcomplicated and not a huge RAM eater.
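For reference, a minimal compose file in that spirit (service, image, ports and paths are just examples):

    version: "3"
    services:
      blog:
        image: nginx:alpine
        restart: unless-stopped
        ports:
          - "8080:80"
        volumes:
          - ./blog-data:/usr/share/nginx/html:ro

docker-compose up -d brings it up, and restart: unless-stopped keeps it alive across crashes and reboots.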

Also: k3s might be an option.


Compose is great for home use.

If you run your home server on a separate box, like a RaspPi or NUC, docker-machine via SSH can make provisioning and updating a tad easier.


I've plugged it before, but Portainer (portainer.io) is what finally got me into running containers for apps. It presents a really nice interface for running individual containers or stacks including docker-compose files, and it exposes the nerd knobs for doing things like custom networking or filesystem mounts. I paired it with Ouroboros for automated container upgrades in the background, and the result is that I have a solid set of container apps running at all times that I never have to touch.

Were I learning k8s for work, it wouldn't be a good solution, but for simple home apps that you want to fire and forget, it definitely beats doing a hand-made installation and the periodic care-and-feeding.


I've used Nomad by HashiCorp and it's been really nice to use. I haven't needed to scale it up a ton, so I can't comment on that, but the day-to-day deploying of updated apps is nice and works well.


docker run -d --restart=unless-stopped
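...or spelled out as a fuller (hypothetical) example:

    docker run -d --restart=unless-stopped --name myblog -p 8080:80 nginx:alpine

unless-stopped answers both questions above: the container is restarted on crashes and brought back when the daemon starts after a reboot, but not if you stopped it by hand (assuming the Docker daemon itself is enabled at boot).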


I can attest to RPi I/O speed: it's horrible. Combined with the fact that I have to build Docker containers on the RPis themselves (because of ARM), it's more of a hassle than a cool add-on (to be clear, I'm running Docker Swarm in a way that isn't too different from the setup of OP).

The only reason I can see for RPis in Kubernetes is if you're exclusively using ARM everywhere and/or are running some distributed cluster among different locations (like Chick-fil-A).


You can actually build ARM images on your x64 machine using newer versions of Docker with BuildKit built into it. I think it uses QEMU under the hood.

I recently did this at work, using a single Dockerfile to build multi-arch images for ARM and x64 - it's pretty nice!
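For reference, the invocation boils down to roughly this (image name made up; QEMU emulation has to be registered first, e.g. via the tonistiigi/binfmt image):

    docker buildx create --use
    docker buildx build --platform linux/amd64,linux/arm64 -t example/myimage:latest --push .

The --push matters: the classic local image store can't hold a multi-arch manifest list directly.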


Previously I've done this, but found it takes a bunch of time. I'll give it another go, though, and see how it goes!


Is there an RPi alternative (i.e. similar size, form factor, etc.) that has better I/O and network speed?

I used RPis for my robot (https://sendc.at/dl/Kjjbt3dij6T733YgWsyv3p3Vv5GPGKO1q5IEFStV..., with daughter boards for motor control) and they were pretty convenient, but never really considered running anything server-like on them because they have reportedly woeful network performance.


I have this https://www.hardkernel.com/shop/odroid-n2-with-4gbyte-ram/

It's pretty powerful for the cost and has no fan etc. Arm64.


Those look sweet, thanks!


The pi 4 has gigabit ethernet. I'm not sure why you'd think it has "woeful network performance". The 3B+ could only do a couple hundred megabit, which was somewhat mediocre, though.


If they've fixed it with the 4 then that's great, ISTR in previous versions the Ethernet was backed by the shared USB controller (https://raspberrypi.stackexchange.com/questions/45130/why-do...), hence my question.


The shared USB controller wasn't the issue, the lack of USB 3 was.


OK, "the use of a backing bus that wasn't fast enough".


You don't have to build on the RPis themselves: https://docs.docker.com/buildx/working-with-buildx/#build-mu...


Don't build stuff on the Pi! You just need to set up a cross-compiler toolchain and compile it on your much more capable AMD64 machine.


The RPi 4 can saturate gigabit with ease, it's in a different league compared to the RPi 3 and lower, avoid those like the plague.


This is not much of a problem if you switch away from the 32-bit Raspbian. Most images will work out of the box on ARM64.


I use drone.io to build all my arm images for my pi cluster.


What I miss from all of these tutorials is one very important piece: how to handle network routing and DNS automation within your home network, which in the typical scenario is handled by the ingress/cloud controller. Without an automated (or easy enough) way of reaching the apps you're deploying there, each of these clusters is pretty much useless for users, except maybe for learning the basics of k8s, which is more easily done with minikube.


I use MetalLB to allocate RFC1918 IPs out of a dedicated pool to LoadBalancer services. MetalLB then publishes these to my router over BGP because, you know, why not?
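For the curious, the MetalLB config of that era was a ConfigMap along these lines (addresses and ASNs made up; newer MetalLB releases use CRDs instead):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        peers:
        - peer-address: 192.168.1.1
          peer-asn: 64500
          my-asn: 64501
        address-pools:
        - name: default
          protocol: bgp
          addresses:
          - 192.168.100.0/24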

I then have external-dns running (https://github.com/kubernetes-sigs/external-dns) which manages the relevant A/CNAME records on Google DNS (other DNS providers are supported) so that I can resolve "myservice.mydomain.com" to the service's IP address.

I wrote a bit about the BGP bit last year: https://www.growse.com/2019/04/13/at-home-with-kubernetes-me...

Admittedly, I have no desire to expose any of these services to the internet, but if I did I could use an IPv6 address on the service instead, or add a static NAT rule to the router to forward traffic to the service's IPv4 address. Auto-provisioning of NAT rules feels icky, so I'd probably go down the IPv6 route if I wanted to do this.


Thanks for pointing it out - I tried to cover this area and drew a diagram, but on second thought, I don't think I managed to do it justice.

I have a static LAN IP for an Ingress Controller, say 192.168.2.100. All HTTP/HTTPS requests are port-forwarded to it from my router. From there, the Ingress is in charge of directing the request based on domain name.
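Each app then just declares its hostname on an Ingress, something like this (names made up; the exact API version depends on your cluster, older ones use networking.k8s.io/v1beta1):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
    spec:
      rules:
      - host: myapp.mydomain.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80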

As for DNS, I use a single domain name, and have a record for a wildcard subdomain - so any subdomain will end up at my router, I don't have to configure anything at my DNS registrar when I add yet another application, as long as it's using a subdomain.

ExternalDNS is a superior solution, but most people will only have 1 or 2 domain names.


A few things to do:

- specify a LAN IP for your ingress controller so it doesn't change.

- Use ddwrt/dnsmasq to point *.k8s.myhomenetwork.local to said IP

Once that's configured, you just configure the ingress hostname on services as you would "normally".
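With dnsmasq, that wildcard is a single line (ingress IP made up), since an address= entry matches the domain and every subdomain under it:

    address=/k8s.myhomenetwork.local/192.168.2.100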


Also: do not use the .local TLD. It is a reserved one (RFC 6762), for mDNS/Bonjour:

> This document specifies that the DNS top-level domain ".local." is a special domain with special semantics, namely that any fully qualified name ending in ".local." is link-local, and names within this domain are meaningful only on the link where they originate. This is analogous to IPv4 addresses in the 169.254/16 prefix or IPv6 addresses in the FE80::/10 prefix, which are link-local and meaningful only on the link where they originate.

Microsoft used to recommend .local TLD for AD deployment as a best practice, and nowadays there are companies stuck with this decision. Do not make the same mistake; unlike companies, you probably want your zeroconf stuff to work.


IIRC, the last time I looked into this, .test is the recommended TLD to use because it's reserved for non-production use, i.e. it will never be bought or sold.


So what breaks if you use "*.[lastname].local" for your home network?


On zeroconf-aware systems, it is still expected to be resolved via multicast; service discovery works by looking up SRV/PTR/TXT records on _$service._$protocol.$hostname.local.

How it will behave will depend on your specific stack. Zeroconf-aware systems (Macs, iOS devices, Linux with Avahi - i.e. most modern distributions) will use multicast; zeroconf-unaware ones (Windows) will use your DNS resolver. Devices (printers, etc.) are a toss of a coin.


I'd like to note that the default behavior of Avahi in Debian/Ubuntu/RH/SUSE prevents resolving *.local via unicast DNS, to avoid this collision.


I use dynamic DNS with my Edgemax router. Something like inlets operator or cloudflare Argo are other options.


I think the OP might be more asking about how to get local DNS automatically provisioned with the IP address of a service/container that has been deployed to the LAN.


That's exactly what I meant. I know how to do it myself, but then again I'm not the target for these articles. When I read them with the mindset of someone who is supposed to get use out of them, that's always the big missing bit for me.


Did davestephen's comment help though? Apart from suggesting a .local domain... that's a bad idea, try .lan instead.

Then you could type (for Plex, for example) plex.k8s.lan

Maybe we need the equivalent of traefik for DNS?

https://news.ycombinator.com/item?id=23040478


DNS is part of stock k8s. It usually runs on the 10th IP of the service network.

It's the first service that goes up after you've initialized the cluster and initially serves only internal requests.

Nothing stops you from pointing the matching lookup zone to the internal dns of k8s, however. I've done it before and it worked great for lan requests.
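In dnsmasq terms that's a conditional forward, something like the line below (assuming the common default service CIDR, which puts cluster DNS at 10.96.0.10, and assuming your LAN can actually route to the service network):

    server=/cluster.local/10.96.0.10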

If you want to expose it to the internet, however, an automatically configured DNS is probably not what you want, unless you actually have a public IP range to use with said services. In that case, the original comment makes more sense and you'd just add a wildcard DNS to your ingress controller, which can be traefik or whatever else you want.


I know how to solve this problem, so I wasn't commenting on his comment, but rather on the fact that none of these tutorials so far solve this problem for new users that they target.

Aside from that, I don't think what he proposes is a great solution, because what we'd need is an automated way for deployments to get that DNS record created (or announced) for their IP when they get it or when it changes. Having it done manually and statically is vastly different in usability from what k8s does with ingress/cloud controllers.


I think what type of k8s environment you use very much depends on what you're looking to get out of it.

If it's experience deploying applications into containerized environments, then micro-k8s and k3s seem like reasonable choices, you don't really care about the setup of the underlying components, just that they present the k8s API.

If you're looking for experience managing k8s clusters, then either the distribution you're looking to run in prod or something like kubeadm is perhaps a better option. kubeadm is very "vanilla" in terms of how it's deployed, so it's quite representative of production (on-prem) deployments, perhaps unlike k3s, which makes changes to how k8s works.

If you're looking to quickly test things in k8s, I'd recommend kind as the easiest way to stand up and remove clusters quickly.

And if you're looking for something to run your home services long term, I would recommend not using Kubernetes :) (unless you have a really complex home network which might justify adding k8s to the mix)


I think you are right on the money - the effort of hosting it at home is only worthwhile from an educational standpoint. I do try to keep things as simple as possible so that I can re-deploy the cluster quickly. Also, learning to debug issues in Kubernetes is a skill in its own right.

Regarding kubeadm: some storage solutions don't even work on K3S, or at least I don't see how you can make them work. For example, the EdgeFS Rook integrated CSI driver requires you to deal with feature gates.


> if you're looking for something to run your home services long term, I would recommend not using Kubernetes

If we disregard one’s experience with Kubernetes as a factor, are there any other reasons you see to not use k8s at home?


I suppose it depends on what your intentions are:

* "I want to run my stuff from home indefinitely, in the background": Don't use kubernetes. You'll spend ages setting up your cluster, you'll be fighting to keep it up. Most things you'll want to run probably have an OS package and run happily as systemd services. It's unnecessary overhead, both on the hardware, and mentally. Internal certificates [used to, at least] expire after a year on kubeadm created clusters, so if you don't look at it occasionally your control plane goes down. It doesn't handle the case where all your machines are nearly at full capacity (i.e. >85% disk, or nearly all memory used) and the defaults are to kick things off nodes with "pressure" which is a huge PITA when you don't have a dynamically scalable cluster - nothing makes my blood boil than seeing 200 lines of "Evicted" in kubectl . It's designed for huge workloads in datacentres, after all. You really don't need it for hosting a single user blog and a NAS.

* "I want to set up a homelab and learn k8s": Definitely use kubernetes. You'll learn how painful and time-consuming it is to manage onprem installs, but you'll also learn a lot. A lot of packaged kubernetes solutions follow the same patterns (ingress controllers like nginx / ambassador / envoy, istio service mesh, flannel etc for VLANs, prometheus for monitoring, helm / argocd deployments, ...) so it's super useful if you need to use kubernetes at work. You'll come to realize just how much awful, awful stupid rolling-release "this was deprecated on Tuesday and all our config schemas have changed" bullshit you're protected from when using someone else's (i.e. BigCorp's cloud solution) managed kubernetes cloud thing, and screaming "NO" at anyone who starts to utter "on-prem" will become ingrained in you.

:-)
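(On the certificate point: renewal is scriptable, roughly as below; the subcommand graduated out of "alpha" around v1.19, so the exact spelling depends on your kubeadm version.)

    # run on each control-plane node
    kubeadm certs renew all   # "kubeadm alpha certs renew all" on older releases
    # then restart the control-plane pods so they pick up the new certs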


Do you really have that many computers at home that you need container orchestration? A single machine is capable of handling most home workloads.

So outside of playing with Kubernetes for experience, why would you do that?


i have to agree with you on this. i've been using k8s at work for over 4 years, but operating k8s for home workloads is still too much, unless it's for tinkering/testing.


complexity and overhead. Maintaining Kubernetes is non-trivial (IMO, of course). It has a 9-month support lifecycle, so you need to redeploy everything regularly. There are also API deprecations to deal with periodically, which, if you're not keeping up with k8s developments, can be disruptive.

From a complexity perspective, you're adding more places for things to break. Instead of (say) running your apps in containers with Docker, you add pods, services and ingress as layers, which is more places for things to go wrong.


I've spent a decent amount of time fighting k8s at work. GCP's k8s abstracts away a bunch of things, but it can still be a bit crazy.

Docker compose is really great but only works on a single machine. Also docker compose makes rolling updates a PITA.

What I really want is something like Cloud Run but locally tunable.

Google Cloud Run is essentially: here is a docker image, when you get a http request, spin up a container, when there are no requests for 10 minutes, spin it down. If there are more than 80 req/s, replicate to handle the spike.
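For reference, a deploy there is roughly a one-liner (service, project and image names made up):

    gcloud run deploy myservice --image gcr.io/myproject/myimage --allow-unauthenticated

with knobs like --concurrency and --max-instances controlling the scaling behavior.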

Now I want to say: here's a bunch of Ubuntu machines with X cores, Y RAM and Z SSD storage. Go run these images on it and auto-scale it up and down.

I know k8s was meant to solve this problem but the layers of abstraction on top of abstractions are insane.

I just want to specify my compute/storage pool of machines, my docker images, how they should connect to each other and scale up/down. Boom! It just works.

Is there something that does this?


you can run a kubernetes stack using k3s on an RPi cluster within minutes, and it runs far better than all of these.

https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/

here's a live walkthrough - https://www.youtube.com/watch?v=DjpVtNjiXSU
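For reference, the k3s quick start really is close to a one-liner per node (placeholders are the ones from the k3s docs):

    # on the server node
    curl -sfL https://get.k3s.io | sh -
    # on each agent, with the token from /var/lib/rancher/k3s/server/node-token
    curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -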


The author mentions the differences between k3s and MicroK8s in the post and concludes that, due to the quirks of k3s (a subset of features running in a single binary), it's only really preferable if you want to run Rancher.


That is not true. We have personally run k3s on AWS, including the cluster autoscaler and spot instances.

Last I checked, K3s is a fully certified distro of k8s.


The author disagrees with you on both points: he asserts RPis are way slower than "old" PCs and laptops, and that MicroK8s is preferable over K3s. He actually provides arguments & benchmarks to back up those opinions. Could you do the same for yours?


Because I have run k3s in production with spot instances for a particular use case on AWS, with all the cloud-y features.

I don't want to make this into an anti stance, but we have totally loved the experience of using k3s.

The way you use k3s is generally not like what you read in this post. It's basically a single-command experience with sane defaults (e.g. traefik). The overall experience is very close to a single-click experience... all the way to the cloud.


Hello, it's a fine tutorial, but what you stand up in minutes is mostly an empty cluster. You still need to deal with exposing your services to the network, etc. That's why I drew the layered diagram and tried to cover installing what I consider basic services. You have to have them for the cluster to be of practical use.

Also, K3S is not just faster: some storage solutions don't even work on K3S, or at least I don't see how you can make them work. If you try to install EdgeFS, its integrated CSI driver requires you to deal with feature gates.


> and it runs far better than all of these.

in what way?


k3s is a lightweight stripped down version of k8s, so it's smaller and faster, and tuned for more resource scarce environments, like an RPi.


I recently gave up on home network clustering/Kubernetes. I've since moved to DigitalOcean. I was working on data-heavy apps for testing. I had several SSD ZFS pools, which were the driving reason, and a lot of RAM to work with.

My setup did work initially. I had a dedicated server acting as the metal LB, about $20 a month. That was then connected through a WireGuard tunnel to my home DMZ network, which backed onto a dual Xeon v2 workstation. The latency was very good, under 20 ms, with really good speeds. I'm lucky to have FiOS in my area.

The Xeon workstation died, so I fell back to several ThinkPads. Those were not performant either. So I got several HP t610 nodes: Raspberry Pi speed, but with SATA 3. Rook took all the CPU, and to boot they were running at 90C constantly, even after repasting and fan mods. I didn't want five little space heaters next to my desk.

After all this I ditched the home setup. I had gotten parts for a local Epyc server after my Xeons died, but sold them due to the current situation.

In the end I had wanted to start a series of blogs on Kubernetes and microservice development, to help me learn and flesh out my understanding of Kubernetes.

I don't feel I wasted several months setting up Kube. I now know about the under-the-covers stuff, having deployed OKD, Rancher, Kubespray, and kubeadm. The initial WireGuard setup helped cement a lot of the internal networking model. I was mildly acquainted with it already, having worked on Open vSwitch and OpenStack before.

If you're doing local, I would really recommend a Ryzen 1600AF (6c/12t Zen+, 65W) with either an ASRock Rack X470 or a basic B450 board. Both can take ECC, and should land a little node under $500 or so. There are also SFF PCs; the Lenovo ThinkCentre M93 comes to mind. But at that price, $90 a node, I'd rather move up the stack a little bit.

If you're waiting to buy local, DigitalOcean has been very well priced. If you want to learn the internals, grab a dedicated server and set up KVM to get familiar with them. On the upper end you can get a new Ryzen 3rd gen dedicated for $90 a month. I look at it as a $90 class I take once.

I'm not saying Kube is always right. But I avoided it for a long time, falling back to Docker Swarm. Now that I've done this deep dive, I feel that at its base it's not much more complicated than Docker Swarm; kubeadm is on par with Docker Swarm for ease of use, to me. Add in MetalLB and GitLab to help manage the cluster, and you have a personal little cloud. It's also good to know for future job searches.


All of these "trap your mind in the k8s spiral arm and watch how adroit I am at building something no one would want to troubleshoot" articles are hipster reminders of why I only create occasional throwaways for this site.


Curious how the benchmark compares when adding an SSD on the RPi 4? https://jamesachambers.com/raspberry-pi-4-usb-boot-config-gu...

I don't think it beats anything, but I am sure IO improves fairly significantly.

One of the things I found to be a problem is that most container images found on the different registries are built for x86_64. You would need to rebuild those containers yourself on the RPi.


Happy to see this. I think more orgs should be running on-prem or colocated in a data center somewhere. Public clouds can be a rip-off for larger projects.


There's a massive difference between running a Kubernetes setup like the one in the article for your home lab, and attempting to run Kubernetes on-prem for a business. I would recommend against the latter, unless you're sure you can afford it, in terms of staffing.


question: if not k8s, what has everyone been using to orchestrate containers at home? Ideally, I just want something easy to set up, low maintenance, and easy to back up and restore if something goes wrong.

i've been using k8s at work for over 4 years but for home usage, it's just too much.



Nomad


Can you post a short, high-level overview of how you use Nomad? What does it do for you exactly?


HomelabOS


docker-compose


but docker swarm is kind of dead, isn't it? i need to orchestrate on multiple hosts.




Swarm is dead in the water.

No progress, and there are a ton of basic features that aren't supported. In my use I literally have to remove every single node and reset the cluster if one goes down (I run into some weird gRPC error via Swarm). Combined with weak GPU support... it's there for basic use, but not for anything "intermediate" or higher.


Kubernetes provides a standard way to deploy applications. If I’m familiar with deploying applications on Kubernetes at work, why not have the same for my side projects at home?


The same can be said of any deployment mechanism one is accustomed to.

GP's point is about the time to set up and the complexity involved.


...over simpler approaches, I should qualify.



