I beg to differ. The jump from learning Docker (and containers generally) to learning Kubernetes is not “hard”. Sure, it’s a different paradigm of application deployment, but I’ve seen far too many posts on HN that completely dismiss its value in the name of difficulty.
You can use it perfectly fine even if you’re not “at scale”, and reap all the benefits as if you were.
Idk if it’s because people hate Google, so they hate Kubernetes, or whether they’re “get off my lawn” DevOps heads who want to maintain the complicated walled-garden deployments they hand-rolled for job security, but it’s frankly embarrassing.
Using k8s to deploy is easy, and setting up a cluster with the 'new' admin command (kubeadm) is also straightforward...
Doing maintenance on the cluster isn't. Debugging routing issues with it isn't either, and configuring production-worthy routing to begin with isn't easy. It's only quick if you deploy weave-net and call it a day.
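For reference, the 'quick' path looks roughly like this (a sketch from memory; the weave-net release URL in particular is an assumption, verify against current docs before using it):

  # on the control-plane node
  kubeadm init
  # make kubectl usable as the admin user
  mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
  # drop in weave-net as the pod network and call it a day
  kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
  # on each worker, using the token kubeadm printed above
  kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Getting this far is genuinely quick; figuring out why pod-to-pod traffic breaks six months later is the part that isn't.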
I would strongly discourage anyone from using k8s in production unless it's hosted or you have a full team whose only responsibility is its maintenance, provisioning, and configuration.
Very few people who suggest using Kubernetes are suggesting kubespray or kubeadm. 99% of companies will want to just pay for a managed Kubernetes cluster which, for all intents and purposes, is AWS ECS with more features and less vendor lock-in.
It should also be noted that all "run your code on machines" platforms (like ECS) have similar issues. I remember using ECS pre-Fargate and dealing with a lot of hardware issues with the instance types we were on. It was a huge time sink.
> it's only quick if you deploy weave-net and call it a day
That's exactly the benefit of kube. If something is a pain, you can walk up to one of the big players and get an off-the-shelf solution that works, and spend very little time integrating it into your deployment. No CloudFormation stacks or other mess. Just send me some yaml and tell me some annotations to set on my existing deployments.
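To make that concrete, the integration usually boils down to a pod-template annotation on a deployment you already run. A sketch (the annotation key and every name here are made up; the real ones come from the vendor's docs):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app
  spec:
    replicas: 2
    selector:
      matchLabels: { app: my-app }
    template:
      metadata:
        labels: { app: my-app }
        annotations:
          vendor.example.com/inject: "enabled"  # hypothetical key the vendor tells you to set
      spec:
        containers:
        - name: app
          image: my-app:1.0  # placeholder image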
> I would strongly discourage anyone from using k8s in production unless it's hosted or you have a full team whose only responsibility is its maintenance, provisioning, and configuration
If you have compute requirements at the scale where it makes sense for you to manage bare metal it should be pretty easy for you to find budget for 2 to 5 people to manage your fleet across all regions.
Hi. I run my production 7-figure-ARR SaaS platform on Google-hosted k8s. I spend under 10 minutes a week on Kubernetes; basically give it an hour every few months. Otherwise it is just a super awesome, super stable way for me to run a bunch of bin-packed Docker images. I think it’s saved me tons of time and money over Lambda or ECS.
It’s not F500 scale, but it’s over 100 CPU scale. Confident I have a ton of room to scale this.
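If you’re curious, the “bin-packed” part is mostly just giving every container honest resource requests so the scheduler can stack pods tightly onto nodes. A sketch with illustrative numbers only:

  apiVersion: v1
  kind: Pod
  metadata:
    name: worker
  spec:
    containers:
    - name: worker
      image: my-worker:1.0  # placeholder image
      resources:
        requests:        # the scheduler bin-packs against these
          cpu: 250m      # a quarter core reserved
          memory: 256Mi
        limits:          # hard per-container ceilings
          cpu: 500m
          memory: 512Mi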
If you end up making a blog post about how you do your deployments/monitoring and what it's enabled you to do I think it'd be a great contrast to the "kubernetes is complicated" sentiment on HN.
This sounds like fun. Kind of a “how to use Kubernetes simply without drowning”. Though would it just get downvoted for not following the Hacker News meme train?
I have heard of people taking "years" to migrate to kube, but only on HN, and only at companies whose timelines for "let's paint the walls of our buildings" stretch into the decades. But even once you move, you get benefits that other systems don't have.
1. Off the shelf software (even from major vendors [0])
2. Hireable skill set: You can now find engineers who have used kubernetes. You can't find people who've already used your custom shell scripts.
3. Best practices for free: zero-downtime deploys can now be a real thing for you (see the sketch after this list).
4. Build controllers/operators for your business concepts: rather than manually managing things, make your software declaratively configured.
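On point 3, here is roughly what zero-downtime for free means: a rolling-update strategy plus a readiness probe, so the rollout only proceeds as replacements come up healthy (names, port, and numbers are illustrative):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
  spec:
    replicas: 3
    selector:
      matchLabels: { app: web }
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 0  # never kill a pod before its replacement is ready
        maxSurge: 1        # roll one extra pod at a time
    template:
      metadata:
        labels: { app: web }
      spec:
        containers:
        - name: web
          image: my-web:2.0  # placeholder image
          readinessProbe:    # traffic only reaches pods that pass this
            httpGet: { path: /healthz, port: 8080 }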
I might have misunderstood you, but there is a huge difference between a developer being able to use Docker and understand the basics of containerization and CI/CD, and a devops/ops person managing servers/clusters using Docker Swarm or Kubernetes. The latter is far more difficult to master than the former.
Managing a Kubernetes cluster offers so many ways to shoot yourself in the foot without realizing it. There are dozens of tutorials online for setting up a simple linux/nginx/python/postgres stack (including lots of results for common error searches), while the routing problems of your legacy PHP application behind an Istio-controlled ingress on a specific Kubernetes version will leave you on your own.
Sure, you won't be able to scale indefinitely, but switching a solid containerized project running on your self-managed machines to a Kubernetes setup later will be quite easy (if you heeded devops best practices).
In my experience, adopting Kubernetes is seldom a well-informed decision weighing the pros and cons. Usually it's a stampede effect of higher-ups pushing for Kubernetes, because everyone else is, without really understanding what it entails.
The truth is, Kubernetes is awesome and brings many features to the table. But it also requires ~10% additional, very expensive headcount, adds ~20% more tasks overall, and prolongs the release cycle by ~20% (figures from my experience). Those drawbacks are rarely ever discussed; the work is just dumped onto existing teams on top of their existing responsibilities, leading to struggle and frustration.
Speaking from personal experience, I feel like you just pulled those numbers out of thin air.
At my job, we went from overly complex Elastic Beanstalk deployments to pushing out new releases via Helm charts into k8s... deployment time vastly improved, as did the cognitive load of understanding what was actually happening.
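For anyone who hasn't used Helm: the release step collapses to roughly one idempotent command (the chart path, release name, and value key are placeholders, not our actual setup):

  # --atomic rolls the release back automatically if the upgrade fails
  helm upgrade --install my-app ./charts/my-app \
    --namespace prod \
    --set image.tag="$GIT_SHA" \
    --atomic --wait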
Elastic Beanstalk is a halfhearted attempt at reproducing Google App Engine or Heroku. It is not comparable.
GAE, on the other hand, is dreamy compared to K8s. I once moved some infrastructure to K8s because it was costing too much on GAE; I ended up moving it back because it was worth it. We've subsequently moved it to Digital Ocean's PaaS but that's a different story...
Beanstalk, while great when it came out, is not a great solution now. It was also never really meant for teams who run things at scale. It also got quite complicated because it just didn't expose a lot of knobs.
I think you'd find ECS or similar as easy to work with as k8s and all of them will be faster than beanstalk.
Beanstalk is ALWAYS purposefully slow; this is by design. It mimics how Amazon deploys internally: slow and steady wins the race to safety. It also has some really bad issues if you roll back from a bad deploy to an already-broken app; e.g. you can wedge it pretty badly.
Anyway at this point I don't think Beanstalk is a fair thing to compare to. It's good you moved off.
To add to that, a step to use anything from Google is a step onto Google's infamous "deprecation treadmill". A rather frustrating lifestyle (unless you are inside Google and your code gets updated/maintained in the monorepo).
Go to any Kubernetes page and it's all heavyweight "nodes" and "containers" and "tasks" and "resources" (some of which seem to have very special meanings). It's not easy to get into.
I don't think this is some oversight on behalf of the technical writers. They don't lack the ability to explain it in simple terms; they're putting up a warning sign. If you adopt Kubernetes, your entire way of doing things will now be the Kubernetes way; it's not just going to be a few lines of code you add to a Makefile.
A lot of people are wary of these heavyweight systems, because it's going to end up a fairly hard dependency.
> I beg to differ. The jump from learning Docker (and containers generally) to learning Kubernetes is not “hard”.
Unless you are at a scale where you can employ a full-time Kubernetes team, you probably don't need Kubernetes, and if you insist on using it for production anyway, you absolutely should use one of the many managed offerings (DO is probably cheapest; I have no affiliation with them) or a shrink-wrapped product like Tanzu.
Bootstrapping from scratch on bare metal remains non-trivial, and an in-place upgrade is an order of magnitude harder.
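To give a flavor of why: a kubeadm in-place upgrade is a node-by-node dance, roughly the following (a sketch from memory; check the version-skew policy and current docs, and note that older kubectl releases used --delete-local-data instead):

  # control plane first
  kubeadm upgrade plan             # shows which versions you can move to
  kubeadm upgrade apply <version>  # e.g. the next minor release kubeadm offers
  # then every worker, one at a time
  kubectl drain <node> --ignore-daemonsets --delete-emptydir-data
  # ...upgrade the kubeadm/kubelet packages on the node itself...
  kubeadm upgrade node
  systemctl restart kubelet
  kubectl uncordon <node>

One slip partway through and you are debugging a half-upgraded cluster.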