Hacker News

Wow, this is a huge bummer. A lot of our infrastructure assumptions have been based around having several small GKE clusters.



I think this trend is not good overall, and people will eventually be very unhappy with it. I'd rather help you figure out how to use fewer clusters.


Our cloud service here at Confluent is designed around giving customers their own infrastructure. A lot of the time, that means giving them their own k8s cluster. The management overhead there isn't the issue, however.

The real issue comes into play when you try to make developer environments.

To give our developers any semblance of a "real production-like" workload, they need to work with an entire Kubernetes cluster - maybe even a couple - to simulate what's happening in production.

This means at any given time, we have hundreds of GKE clusters because each developer needs a place to try things. Yes, these are ephemeral and can be tossed aside, and yes they cost a tiny bit in VM prices, but adding a per-cluster management fee is going to skyrocket this expense and push us towards trying to figure out ways to share these clusters between developers, which defeats the entire purpose of the project.

We'll have to seriously consider abandoning GKE for this use-case now and that sucks, because it's by far the fastest managed k8s solution we've found so far.


Try KIND. Much better devex.


ya we're using that in a few places, too actually.


Tim, would you be willing to elaborate on why you dislike the "many small clusters" pattern?


Many small clusters just do not deliver on a lot of the value of Kubernetes. Clusters are still hard boundaries to cross (working to fix that). Utilization and efficiency are capped. OpEx goes up quickly.

There are reasons to have multiple clusters, but I think the current trend takes that too far.

TO BE SURE - there's more work to do in k8s and in GKE.


As always, the insight is appreciated, Tim!


Actually, could you elaborate on the benefits of your approach? edit: I am asking because this is counterintuitive to anything I'd want to solve with K8s. Especially when it comes as a managed service.


I'm sorry to hear that. You can use namespaces and separate node pools to isolate workloads. We'd love to hear more about your use case for having many small GKE clusters.
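For anyone curious what that looks like in practice, here's a minimal sketch of the namespace + node pool isolation pattern. All the names (`team-a`, `team-a-pool`, the `dedicated` taint) are hypothetical; the `cloud.google.com/gke-nodepool` label is the one GKE applies to nodes, and the toleration assumes the pool was created with a matching taint:

```yaml
# Hypothetical namespace for one tenant/team
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# Pod pinned to a dedicated node pool
apiVersion: v1
kind: Pod
metadata:
  name: workload
  namespace: team-a
spec:
  nodeSelector:
    # GKE labels every node with its node pool name
    cloud.google.com/gke-nodepool: team-a-pool
  tolerations:
    # Assumes the pool was created with --node-taints=dedicated=team-a:NoSchedule
    - key: dedicated
      value: team-a
      effect: NoSchedule
  containers:
    - name: app
      image: nginx
```

You'd typically pair this with ResourceQuotas and NetworkPolicies on the namespace to make the isolation meaningful.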


Hey Seth, I know you used to work at HashiCorp on Vault. I think Vault recommends that if you want to deploy it on Kubernetes, it should have the cluster to itself.


That's correct. Vault Enterprise (by my last math) was ~$125k/yr, so that management cost is negligible :)



