I'm going to post my unsolicited opinion about K8s here, as an SWE+SRE who used it heavily for about 1.5 years on GCP.
It's a very cool system. I completely understand why people half-jokingly call it a "distributed operating system." It does a lot of things related to the lifecycles of various kinds of state (secrets, storage, config, deployments, etc.).
However, I believe it goes way too far in putting infrastructure into non-cloud-managed state machines. Things that exist in most modern clouds are being reinvented in K8s. What's more, K8s objects are being created as interfaces to the underlying cloud objects. So you now have two layers of abstraction, each with its own quirks. It's too much.
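To make the two layers concrete, here's a minimal sketch (the app name is made up): a Service of type LoadBalancer is the K8s-level abstraction over the cloud's load balancer, and in practice you end up tuning both layers at once -- the generic K8s spec on one side and provider-specific annotations on the other.

    apiVersion: v1
    kind: Service
    metadata:
      name: web
      annotations:
        # Layer 2: the cloud object underneath, steered via provider-
        # specific annotations (AWS here; GCP and Azure have their own).
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
    spec:
      type: LoadBalancer   # Layer 1: the generic K8s abstraction
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 8080

When something breaks, you get to debug the quirks of the K8s object and the quirks of the cloud load balancer behind it.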
Not to mention that IaC for K8s is extremely immature. This will improve, yes, for some definition of "improve." But if you've ever written Helm charts that integrate with Terraform, you'll know about all the plates you have to keep spinning.
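As a sketch of the kind of drift you have to manage (the keys and names here are hypothetical): the same values end up owned by two tools, each with its own state.

    # values.yaml for a hypothetical chart -- one plate.
    image:
      repository: registry.example.com/web
      tag: "1.4.2"   # often also pinned by Terraform via a helm_release
                     # override, stored in Terraform state -- a second plate
    database:
      host: ""       # left blank here; injected by Terraform from a cloud
                     # database output at apply time

If the chart default, the Terraform override, and the live cluster ever disagree, good luck deciding which one is "the" truth.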
It's not a system I see sustaining itself over the long term. Google may continue to use and support it forever, but afaik, they are the most invested in its success. Other cloud platforms, like AWS, seem to be focused on not re-inventing all of their cloud offerings in K8s.
> Things that exist in most modern clouds are being reinvented in K8s.
My more optimistic view is that vendor-specific APIs are being standardized. Initial implementations have their issues, but as more people use them, the cloud vendors will improve their offerings.
Uniform abstractions are necessarily leaky -- the adoption of the k8s standardization, such as it is, papers over implementation details that serious operators require visibility into.
The problem is that few people seem to understand the infrastructure-as-code concept, and they essentially break the core k8s declarative architecture with imperative workflows that look just like the bash-script install insanity we left behind. These workflows are encouraged by tools like Helm, and by examples that create k8s objects on the fly without ever writing down, much less retaining, the "code" part of IaC.
It turns a tool that escaped the tyranny of endlessly mutating blessed servers -- replacing them with immutable, contained services and unified declarative lifecycles -- back into an imperative mess of magical incantations that must be spoken in just the right way.
Kubernetes is simple when used as designed, but staggeringly complicated when forced into the mutable imperative workflows it was expressly designed to prevent.
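The contrast in practice (app name hypothetical): the imperative path is a sequence of one-off commands that leave no trace in your repo; the declarative path is a manifest you apply, diff, and roll back through version control.

    # web.yaml -- the declarative version of what would otherwise be
    # imperative commands (kubectl create deployment web --image=...,
    # kubectl scale deployment web --replicas=3, kubectl set image ...).
    # This file lives in version control; `kubectl apply -f web.yaml`
    # converges the cluster to it, and git history is your rollback.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: registry.example.com/web:1.4.2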
Mind expanding a little on your complaints about Helm? I’ve only used Helm as a templating solution (and even then only to differentiate between local, staging and production), so I’m curious what problems I have to guard against.
Think of Kubernetes like a single application. The config files are the source for that application; the running cluster is the compiled application running on the user's computer. By default, Helm injects more "compiled" code, unrelated to your application's source, into the running application. Allowing any tool to alter active cluster state diffuses your single source of truth -- your source code -- into multiple sources of truth, which will not remain in sync with your source unless great care is taken. Staying in sync matters, because that is how you roll back to a known good state when things break.
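You can see this directly in how Helm 3 stores its state: each release revision lives in a Secret inside the cluster itself, not in your repo. Roughly (sketch; "myapp" is a hypothetical release name, and the data is a placeholder):

    # Helm 3 keeps release state in the cluster, one Secret per revision.
    apiVersion: v1
    kind: Secret
    metadata:
      name: sh.helm.release.v1.myapp.v1
      labels:
        owner: helm
    type: helm.sh/release.v1
    data:
      release: <opaque compressed blob of chart, values, and manifests>

So the record of "what is deployed" lives next to the deployment, in the mutable thing itself, rather than in your source.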
If you are using Helm to generate source code for your application, you still have the added complexity of an additional build step, but at least you can choose to add the generated code to your app in a way that tracks with the rest of your code.
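That workflow looks roughly like this (chart and paths are hypothetical): render once as a build step, commit the output, and only ever apply what's in the repo.

    # rendered/prod.yaml -- GENERATED FILE, do not edit by hand.
    # Regenerated as a build step with something like:
    #   helm template myapp ./chart --values values-prod.yaml > rendered/prod.yaml
    # Because the rendered output is committed and reviewed, cluster
    # state still traces back to git history like the rest of your source.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    # ...the rest of the rendered manifests follow...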
Also, Helm chart authors are of varying skill levels, and even skilled ones inevitably make incorrect assumptions about your deployment environment. It takes a lot of additional code in Helm charts to support more flexibility, so it often gets ignored, and you are left with a black box that doesn't quite do what you want it to do.
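A typical example (the chart fragment is hypothetical): a chart that hardcodes a cloud-specific storage class works fine on the author's cluster and silently pins you to their environment.

    # templates/pvc.yaml from a hypothetical chart
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: {{ .Release.Name }}-data
    spec:
      storageClassName: gp2   # author assumed AWS; misbehaves on GKE
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

    # Unless the author also threads a value through for it, e.g.
    #   storageClassName: {{ .Values.storageClassName | default "gp2" }}
    # you can't override the assumption without forking the chart.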