If you're running on a single host anyways, why not just use init scripts or unit files? All Kubernetes is giving you is another 5-6 layers of indirection and abstraction.
EDIT: Quick clarification: still use containers. However, running containers doesn't require running Kubernetes.
> learning Kubernetes can be done in a few days
The basic commands, perhaps. But with Kubernetes' development velocity, the learning never stops - you really do need someone dedicated to it at least part time to make sure a version upgrade doesn't break your automation or compliance (something that's happened at my company a few times now).
> If you're running on a single host anyways, why not just use init scripts or unit files?
You're absolutely right. Init scripts and systemd unit files could do every single thing here. With that said, might there be other reasons?
The ability to have multiple applications running simultaneously on a host without having to know about or step around each other is nice. This gets rid of a major headache, especially when you didn't write the applications and they might not all be well-behaved in a shared space. Automatic restarts and handling of dependent services are also a nice bonus, as is isolating one instance of a service from another in a way that goes deeper than just changing a port number.
Personally, I've also found that init scripts aren't always easy to learn and manage either. But YMMV.
> The ability to have multiple applications running simultaneously on a host without having to know about or step around each other is nice.
If you're running containers, you get that for free. You can run containers without running Kubernetes.
And unit/init files are no harder to learn than the Kubernetes YAML DSL (for simple cases like this, they're probably significantly easier). The unit files in particular will definitely be simpler, since systemd is container aware.
I'm extremely cynical about init scripts. I've encountered too many crusty old systems where the init scripts used some bizarre old trick from the 70s.
Anyway. Yes, you're once more absolutely correct. Everything here can be done with unit files and init scripts.
Personally, I've not found that the YAML DSL is more complex or challenging than the systemd units. At one point I didn't know either, but I definitely had bad memories of managing N inter-dependent init scripts. I found it easier to learn something I could use at home for an rpi and at work for a fleet of servers, instead of learning unit scripting for my rpi and k8s for the fleet.
It's been my experience that "simple" is generally a matter of opinion and perspective.
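Here's roughly the kind of unit file we're talking about for running a single container (a sketch with made-up names; assume an nginx container called mywebapp):

```ini
[Unit]
Description=mywebapp container
Requires=docker.service
After=docker.service

[Service]
# remove any stale container left over from an unclean stop
ExecStartPre=-/usr/bin/docker rm -f mywebapp
ExecStart=/usr/bin/docker run --rm --name mywebapp -p 8080:80 -v /srv/mywebapp/data:/data nginx:1.25
ExecStop=/usr/bin/docker stop mywebapp
Restart=always

[Install]
WantedBy=multi-user.target
```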
75% of this is boilerplate, but there's not a lot of repetition and most of it is relevant to the service itself. The remaining lines describe how you interact with Docker normally.
In comparison, here's a definition to set up the same container as a deployment in Kubernetes.
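(Sketched with the same placeholder names as the unit file above, so the details are illustrative rather than exact:)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebapp
  labels:
    app: mywebapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mywebapp
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: nginx:1.25
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        hostPath:
          path: /srv/mywebapp/data
```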
Almost 90% of this is meaningful only to Kubernetes, not to the people who will have to read this object later. There's a lot of repetition of content (namely the labels and nested specs), and the "template" portion is not self-explanatory (a template of what? Why is it considered a "template"?)
This is not to say that these abstractions are useless, particularly when you have hundreds of nodes and thousands of pods. But for a single node, it's a lot of extra conceptual work (not to mention googling) just to avoid learning how to write unit files.
That's a great example! Thank you very much for sharing.
That said, it's been my experience that a modern docker application is only occasionally a single container. More often it's a heterogeneous mix of three or more containers that collectively comprise the application. Now we've got multiple unit files, each handling a different aspect of the application, and the notion of a "service" now conflates system-level services like docker with application-level things like redis. There's a resulting explosion of cognitive complexity as I have to keep track of what's part of the application and what's a system-level service.
Meanwhile, the Kubernetes YAML requires an extra handful of lines under the "containers" key.
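Something like this, reusing the placeholder Deployment above and adding a hypothetical redis sidecar:

```yaml
containers:
- name: mywebapp
  image: nginx:1.25
  ports:
  - containerPort: 80
- name: cache
  image: redis:7
```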
Again, thank you for bringing forward this concrete example. It's a very kind gesture. It's just possible that use-cases and personal evaluations of complexity might differ and lead people to different conclusions.
> There's a resulting explosion of cognitive complexity as I have to keep track of what's part of the application and what's a system-level service.
If you can start them up with additional lines in the deployment YAML (extra containers in a pod), it's just another ExecStart= line (or another small unit file) that calls Docker with a different container name.
EDIT: You do have to think a bit differently about networking, since the containers will have separate networks by default with Docker, in comparison to a k8s pod. You can make it match, however, by creating a network for the shared containers.
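For example (names are made up):

```sh
# create a user-defined network once, then attach both containers to it
docker network create mywebapp-net
docker run -d --name mywebapp --network mywebapp-net -p 8080:80 nginx:1.25
docker run -d --name cache --network mywebapp-net redis:7
# on the shared network, "mywebapp" can reach redis at the hostname "cache"
```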
If, however, there's a "this service must be started before that one" requirement, systemd's dependency system will be more comprehensible than Kubernetes, since Kubernetes doesn't build dependency trees; the recommended approach there is init containers.
As a side note, unit files can also do things like init containers using the ExecStartPre hook.
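A sketch of what that looks like (the chown step, image, and paths are just an example):

```ini
[Service]
# roughly what an init container does: one-off setup before the main container starts
ExecStartPre=/usr/bin/docker run --rm -v /srv/mywebapp/data:/data busybox:1.36 chown -R 1000:1000 /data
ExecStart=/usr/bin/docker run --rm --name mywebapp -p 8080:80 -v /srv/mywebapp/data:/data nginx:1.25
```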
Even here on HN, one need not look far to find someone who resents everything about systemd and insists on using a non-systemd distro. Such people run systems in real life, too.
Should you run into such a system, you're still just writing code, and interacting with a daemon that takes care of the hardest parts of init scripts for you.
There are no pid files. There are no file locks. There is no "daemonization" to worry about. There is no tracking the process to ensure it's still alive.
Just think about how you would interact with the docker daemon to start, stop, restart, and probe the status of a container, and write code to do exactly that.
Frankly, Docker containers are the simplest thing you could ever have to write an init script for.
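Sketched out with placeholder names, the whole thing comes down to something like:

```sh
#!/bin/sh
# minimal init-script sketch that just delegates to the docker daemon
NAME=mywebapp
IMAGE=nginx:1.25

case "$1" in
  start)   docker start "$NAME" 2>/dev/null \
             || docker run -d --name "$NAME" -p 8080:80 "$IMAGE" ;;
  stop)    docker stop "$NAME" ;;
  restart) docker restart "$NAME" ;;
  status)  docker inspect -f '{{.State.Status}}' "$NAME" ;;
  *)       echo "Usage: $0 {start|stop|restart|status}" >&2; exit 1 ;;
esac
```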
It gives you good abstractions for your apps. I know exactly what directories each of my apps can write to, and that they can't step on each other's toes. Backing up all of their data is easy because I know the persistent data for all of them is stored in the same parent directory.
Even if the whole node caught on fire, I can restore it by just creating a new Kubernetes box from scratch, re-applying the YAML, and restoring the persistent volume contents from backup. To me there's a lot of value over init scripts or unit files.
> I know exactly what directories each of my apps can write to, and that they can't step on each other's toes
You can do this with docker commands too. Ultimately, that's all that Kubernetes is doing, just with a YAML based DSL instead of command line flags.
> Even if the whole node caught on fire, I can restore it
So, what's different from init/unit files? Just rebuild the box and put in the unit files, and you get the same thing you had running before. Again, for a single node there's nothing that Kubernetes does that init/unit files can't do.
> You can do this with docker commands too. Ultimately, that's all that Kubernetes is doing, just with a YAML based DSL instead of command line flags.
Well, I mean, mostly. You're gonna be creating your own directories and mapping them into your docker-compose YAMLs or Docker CLI commands. And if you have five running and you're ready to add your sixth, you're gonna be SSHing in to do it again. Not quite as clean as running "kubectl apply" remotely and having the persistent volume created for you, since you declared that you needed it in your YAML.
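For the storage part, it's roughly a PersistentVolumeClaim in the same manifest (names and sizes are made up), applied from anywhere with kubectl apply -f mysixthapp.yaml:

```yaml
# assumes the cluster has a default StorageClass that can provision this
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysixthapp-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```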
> So, what's different from init/unit files? Just rebuild the box and put in the unit files, and you get the same thing you had running before. Again, for a single node there's nothing that Kubernetes does that init/unit files can't do.
Well you kinda just partially quoted my statement and then attacked it. You can do it with init/unit files, but you've got a higher likelihood of apps conflicting with each other, storing things in places you're not aware of, and missing important files in your backups.
It's not about what you "can't" do. It's about what you can do more easily, and treat bare metal servers like dumb container farms (cattle).
> You're gonna be creating your own directories and mapping them into your docker-compose YAMLs or Docker CLI commands.
You don't have to create them; Docker does that when you specify a volume path that doesn't exist. You do have to specify them with a -v flag, compared to a full "volume" object in a pod spec.
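For example (placeholder paths):

```sh
# docker creates /srv/mywebapp/data on the host if it doesn't already exist
docker run -d --name mywebapp -v /srv/mywebapp/data:/data nginx:1.25
```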
> And if you have five running and you're ready to add your sixth, you're gonna be SSHing in to do it again
In comparison to SSHing in to install Kubernetes and connect it to your existing cluster, which ultimately comes down to creating unit files that execute docker container commands on the host (to run the kubelet, specifically).
> apps conflicting with each other
The only real conflict would be with external ports, which you have to manage with Kubernetes as well. Remember, these are still running in containers.
> storing things in places you're not aware of, and missing important files in your backups.
Again, they are still containers, and you simply provide a -v instead of a 'volume' key in the pod spec.
> treat bare metal servers like dumb container farms
We're not talking about clusters, though. The original post I was responding to was talking about one VM.
I will agree that, when you move to a cluster of machines and your VM count exceeds your replica count, Kubernetes really starts to shine.