Maybe you don't. I wouldn't recommend that anyone use something extremely complex for mission-critical architecture unless it's unavoidable. If you feel the same, that's probably a perfectly sound reason to avoid Kubernetes when there's no necessity.
When I wrote "extremely complex" I didn't mean all the models and concepts from the user documentation - those are the easy part (although I haven't used K8s much and am probably missing a lot about its user-facing parts). I mean the internals. Because when things break and the cluster goes down, it's the internals that matter.
I use Docker instead of plain LXC because it adds the convenience of layers and versioning without introducing any significant complexity. I've had some issues with libnetwork (which sometimes behaved oddly when containers crashed under high load, or when docker-proxy got stuck somehow), but I can live with that.
I also prefer Swarm to manual networking. It's trivial to set up and just works under normal circumstances. And I feel confident that I can fix issues reasonably fast. I've experimented with it in some odd scenarios (like a heterogeneous cluster mixing x86_64 and armv7h nodes), and when it (unsurprisingly) failed, I was able to quickly find the relevant source code, read it, understand the details, and correct the problem (which was trivial but undocumented). And if I hadn't been able to, I'm absolutely certain I could have scrapped the cluster and scripted reprovisioning with standalone nodes, Compose, and ad-hoc VPNs - I did exactly that when an old version of Rancher failed me badly.
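For context, the fallback I mean is nothing fancy - a hypothetical minimal Compose file like this (service name, image, and port are made-up placeholders), with each standalone node running its own stack and the VPN handled outside Docker:

```yaml
# Hypothetical sketch: one standalone node's stack.
# Service name, image tag, and ports are placeholders, not from a real project.
version: "3.8"
services:
  web:
    image: myapp:1.2.3
    restart: unless-stopped   # keep the container up without an orchestrator
    ports:
      - "8080:8080"           # published directly; node-to-node traffic goes over the ad-hoc VPN
```

Run it per node with `docker compose up -d` and you've traded orchestration features for something you can fully reason about.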
Now, maybe I'm just too stupid, but I can't say the same for Kubernetes. I've tried to experiment with it - also under explicitly unsupported conditions - and tried to dig into its documentation and code when it failed (obviously it did; I was asking for it). However, I was overwhelmed by how much there is. What I learned was a tiny bit about CNI internals, but the primary result was the conclusion: "sorry, but nope, that's not something I feel comfortable dealing with if I'm the one supporting it." And I don't mean that it's fail-prone or anything - just that every software project has its bugs, and sometimes they manifest at really inconvenient times.
On the other hand, Kubernetes is very nice when it's not you who's responsible for its operation and you're just a user. Spin up a cluster on GKE and you know Google will take care of it - you just have to write some YAML to describe your project and enjoy. In that scenario, I think I can recommend it - and even though I haven't run anything serious on K8s, some toy projects worked without issues and were easy to deploy and maintain. Kubernetes also feels more mature than the alternatives, with lots of nice functionality, supposedly covering many different use cases and scenarios. Swarm, by comparison, feels like it lacks a lot - I've subscribed to plenty of tickets for features I wanted that aren't there yet.
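To illustrate what "write some YAML" means in practice, here's a minimal sketch of a Kubernetes Deployment (all names, the image path, replica count, and port are placeholder assumptions, not from any real project):

```yaml
# Hypothetical example; metadata names, labels, image, and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                  # K8s keeps this many pods running for you
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: gcr.io/my-project/myapp:1.2.3
          ports:
            - containerPort: 8080
```

Apply it with `kubectl apply -f deployment.yaml` and the managed control plane handles scheduling, restarts, and rollouts - which is exactly the part you'd rather have Google operate than debug yourself.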
Just a personal opinion.