
This is simply not true: maintaining a k3s cluster (k8s proper has a few gotchas) is very easy, especially with k3s auto-upgrade, as long as you have proper eviction rules (and maybe PodDisruptionBudgets). Ceph can be tricky, but you can always opt for lemon or Longhorn, which are nearly zero maintenance.
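A minimal sketch of the eviction-rule point: a PodDisruptionBudget keeps auto-upgrade node drains from evicting every replica at once. The `myapp` name and label are hypothetical placeholders.

```shell
# Create a PodDisruptionBudget so a node drain during auto-upgrade
# always leaves at least one "myapp" pod running.
kubectl create poddisruptionbudget myapp-pdb \
  --selector=app=myapp \
  --min-available=1
```

With this in place, `kubectl drain` (which the k3s upgrade controller uses under the hood) will wait rather than violate the budget.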

There are thousands of Helm charts available that let you deploy even the most complicated databases within a minute.
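As a sketch of how little work that is, here is a database deployed from a public chart repository (Bitnami's PostgreSQL chart; the release name and password are placeholders):

```shell
# Add the Bitnami chart repository, then install PostgreSQL in one step.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql \
  --set auth.postgresPassword=changeme
```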

Deploying your own service is also very easy as long as you start from one of the popular Helm templates.

Helm is by no means perfect, but it's great once you set it up the way you want. For example, I get full code completion for values.yaml simply by having "deployment" charts that bundle the application's database(s) and the application itself into a single Helm chart.
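One way to build such a "deployment" chart is an umbrella chart whose Chart.yaml lists the app and its database as dependencies, so one `helm install` deploys both. Everything below (chart names, versions, repository URLs) is a hypothetical sketch, not the commenter's actual setup:

```shell
# Write an umbrella Chart.yaml that pulls in the app and its database
# as subcharts, then install the whole bundle with one release.
mkdir -p myapp-deploy
cat > myapp-deploy/Chart.yaml <<'EOF'
apiVersion: v2
name: myapp-deploy
version: 0.1.0
dependencies:
  - name: myapp
    version: 1.0.0
    repository: https://charts.example.com
  - name: postgresql
    version: 13.2.0
    repository: https://charts.bitnami.com/bitnami
EOF
helm dependency update myapp-deploy
helm install myapp ./myapp-deploy -f values.yaml
```

The single top-level values.yaml is what makes editor code completion practical: one schema covers the app and its database together.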

You can't just "jump into" Kubernetes the way you can with many serverless platforms, but spending a week banging your head to potentially save hundreds of thousands in a real production environment is a no-brainer to me.



But this also isn't true. Everything in k8s has caveats. You can't deploy a Percona MySQL DB on ARM, for instance. Various operators have various issues that require manual intervention; it's all pretty much a clusterfuck. Debugging why a service works locally as a systemd service but not on k8s is also time-intensive and difficult. Add the steadily changing features and frequent version bumps, and it's a full-time job. And many Helm charts aren't that great to begin with.

And what about when a deployment hangs and can't be deleted but still allocates resources? This is a common issue.
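For what it's worth, the usual manual intervention for that situation looks something like this (the pod and deployment names are hypothetical; clearing finalizers is a last resort, since it skips cleanup the controller wanted to do):

```shell
# Force-delete a pod stuck in Terminating, skipping the grace period.
kubectl delete pod myapp-5f6d8-abcde --grace-period=0 --force

# If the owning resource still won't go away, clear its finalizers.
kubectl patch deployment myapp --type merge \
  -p '{"metadata":{"finalizers":null}}'
```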


I use Bitnami Helm charts exclusively and they have never failed me. ARM is a caveat in itself, but I've had a really good experience with k3s.


Per the k3s website and docs, they don't call out that it's good for bare-metal production. That tells me it's not built for heavy workloads and is instead meant for dev environments and edge compute.


The default k3s configuration is not scalable, which is especially bad for etcd.

Etcd can be replaced with Postgres, which solves the majority of issues, but that does require self-management. I've seen k3s clusters with 94 nodes chug along just fine in the default configuration, though.
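Concretely, k3s supports an external datastore via the `--datastore-endpoint` flag; a sketch with a placeholder Postgres DSN:

```shell
# Start the k3s server against an external PostgreSQL datastore
# instead of the embedded etcd/sqlite backend.
k3s server \
  --datastore-endpoint="postgres://k3s:changeme@db.example.com:5432/k3s"
```

The trade-off the comment mentions: the cluster state now lives in a database you have to back up, tune, and upgrade yourself.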


I think all the caveats and choices in your opening sentence rather undercut the idea that the parent comment's point "simply" isn't true...


My point is that you can choose to overcomplicate it.




