Vcluster – Create fully functional virtual Kubernetes clusters (github.com/loft-sh)
35 points by kiyanwang on May 17, 2021 | 19 comments



All problems in computer science can be solved with another level of indirection.


Everything old is new again; I remember when Borgproxy allowed you to schedule jobs to "whichever cluster had the least load" within a region, and I imagine this is implemented and works similarly.


vcluster maintainer here. Right now, it is more about multiple vclusters running on the same Kubernetes cluster, which addresses the multi-tenancy issues of Kubernetes. However, a single vcluster with multiple Kubernetes clusters to schedule workloads on is a super interesting idea as well. Cluster federation was a big topic a few years ago, but it is very hard to solve.


Oh my god, I'm so happy I could cry.

> Each vcluster runs inside a namespace of the underlying k8s cluster. . . It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.

Kubernetes has invented a bunch of its own means of measuring & consuming resources, and everything under Kubernetes runs inside one single namespace, hopefully carefully managed (by human operators who make sure things are going OK, or else).

I keep feeling like I'm missing something, that I have to be wrong, because Linux has really good ways to divide workloads. Namespaces. You put different jobs or different classes of work into different namespaces. Weirdly Kubernetes is most unhelpful about doing this job, about assisting with this subdivision process.

This is the first project I've seen in Kubernetes land that explicitly names the clear, obvious, wonderful step of using multiple Linux namespaces to divide up work, as a way to more reasonably achieve multi-tenancy & separation of workloads. The first Kubernetes project I've heard of that lets the Linux kernel do what it's good at.

I'm not sure what the use is for corporations or enterprises. I think dividing up resources among different teams is a reasonable use. But I am incredibly excited for hyper-converged home labs & other ever-growing DIY computer-science projects to start having a good way to host a variety of tenants in a reasonably controlled, semi-standard way. Introducing multiple namespaces is something Kubernetes has desperately needed. I am head over heels in love with what's happening here. I hope the implementation doesn't get too much wrong, because Kubernetes has ignored the best way to divide resources up on a computer and has done too much on its own, within one namespace; starting to expand across multiple namespaces is a game changer that opens the field for how useful and how interesting Kubernetes is.


It looks like vcluster is referring to K8s namespaces, which have existed forever, not the Linux namespaces you seem to be referring to.


I think you are right. Blast it. Thanks!


Not really on-topic, but does anyone have a "fake" Kubernetes? More than just an API server that serves canned responses: something that actually runs the control loops but doesn't run any containers. For example, I could configure it in advance like:

   containers:
     image: foo/bar:1.2.3
     produce_logs: "hello this is fake foo/bar"
Then kubectl apply a manifest like:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: foobar
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: foobar
      template:
        metadata:
          labels:
            app: foobar
        spec:
          containers:
            - name: foobar
              image: foo/bar:1.2.3
Then run a command like "kubectl get pods":

    NAME                      READY   STATUS    RESTARTS   AGE
    foobar-abcd123456-f0ob4   1/1     Running   0          1s
Or even "kubectl get logs foobar-abcd123456-f0ob4":

    hello this is fake foo/bar
Basically, this would let you test things that interact with Kubernetes without having to have a cluster. This thing would just be a library that you link against your tests, or a program that you start up next to your tests, and it gets you most of the way there. It's better than a fake apiserver you write yourself, where you "know" that the service controller looks at pods that match a selector and then produces Endpoints objects containing their IP addresses; you could write this kind of low-overhead test without having to know those things (or get them wrong, and code your app to a k8s spec that doesn't exist except in your own mind).

That's what I'd really like.


Not quite fake, but this might be close enough? https://github.com/kubernetes-sigs/kind

Alternatively you could create a CRI plugin that does what you want. https://github.com/kubernetes/kubernetes/blob/242a97307b3407...


Yeah, kind is a good compromise. It's more than I want and it does take some time to start up, but it's pretty lightweight and very accurate. Because everything is running locally, you can use tools like `perf` to keep an eye on things you're running on k8s, right from your usual workstation. I like it a lot.

The CRI plugin sounds like probably the right thing to do. Everything is real up until actually running stuff.


You should be able to get close to that with "real" Kubernetes by only running etcd, kube-apiserver, and kube-controller-manager (without any kubelets or kube-proxies); all Pods will just get stuck Pending, but most other stuff should work. Of course, that means you're still running etcd and managing multiple servers, but at least you don't have to give it any global system access.

If you're testing something that depends on pod lifecycles then you should be able to get most of the way by updating the status yourself from your tests.
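Rough sketch of that status-faking idea with client-go (assumes the pod/namespace names from the example above and a kubeconfig in the default location; adjust for your test setup):

    // Sketch: mark an existing (never-scheduled) pod as Running so code
    // that watches pod lifecycles sees the phase it expects. With no
    // kubelet around, nothing will overwrite the status we set.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func markPodRunning(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        pod.Status.Phase = corev1.PodRunning
        pod.Status.Conditions = append(pod.Status.Conditions, corev1.PodCondition{
            Type:   corev1.PodReady,
            Status: corev1.ConditionTrue,
        })
        // UpdateStatus only touches the status subresource.
        _, err = cs.CoreV1().Pods(ns).UpdateStatus(ctx, pod, metav1.UpdateOptions{})
        return err
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        if err := markPodRunning(context.Background(), cs, "default", "foobar-abcd123456-f0ob4"); err != nil {
            panic(err)
        }
    }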

Fake logs may be a bit trickier, but I'd be fairly wary of anything trying to machine-parse logs anyway (unless you're testing a log aggregator, I suppose...).


You'd just then need to implement a fake kubelet that pretends it ran something and produced the fake logs.


You can also achieve the above very easily with kubeadm, which is the upstream Kubernetes bootstrapper. All it takes is:

kubeadm init


You could deploy the vcluster pod without the syncer. That would give you a k3s cluster without a scheduler. You can create deployments, pods, services, etc., but no containers would actually be started. Of course, that means you still need a host cluster to run these half-baked vclusters on.


It's easy enough to mock the images in there to do this via a real container, e.g.:

    containers:
      - name: foobar
        image: busybox
        args: ["echo", "hello this is fake foo/bar"]
and run this on kind.


Kubernetes actually ships with a pretty decent "Fake" infrastructure, but I'll admit (having used it) pretending at control loops is still a PITA.

https://pkg.go.dev/k8s.io/client-go/kubernetes/fake
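For anyone curious, a minimal sketch of what using the fake clientset looks like (object names just reused from the example above):

    // Sketch: client-go's fake clientset lets code that takes a
    // kubernetes.Interface run against in-memory objects, no cluster needed.
    // Caveat: it only stores objects; nothing runs the real control loops.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes/fake"
    )

    func main() {
        // Seed the fake clientset with a pre-existing "running" pod.
        cs := fake.NewSimpleClientset(&corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "foobar-abcd123456-f0ob4", Namespace: "default"},
            Status:     corev1.PodStatus{Phase: corev1.PodRunning},
        })

        // Code under test just sees a normal kubernetes.Interface.
        pods, err := cs.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase)
        }
    }

That covers reads and writes against the API types; faking what the controllers would then do with those objects is the painful part mentioned above.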


As others mentioned, just use kind to create a real cluster on demand whenever you need one.

kind create cluster

30 seconds later you will have a fresh cluster to do whatever you want.

kind can even do multi-node HA clusters complete with a haproxy container for load balancing the kube-apiserver. Tearing things down is super simple too via "kind delete cluster"


Awesome!

Seems neat for spinning up temporary lightweight clusters for E2E tests in a CI/CD pipeline, instead of a KinD cluster, which can perform quite poorly.


Ephemeral clusters for dev and CI/CD are the main use case right now, but because the pods are scheduled in the underlying cluster and don't take a performance hit (as you already mentioned), vclusters can also be a very interesting concept for production workloads, effectively sharding your k8s cluster into logical clusters / control planes.


The only problem with this is needing to push your container images in CI before being able to schedule, as the real pod will run on a real node, which will need to pull the image.

In kind you can load locally built container images into the single node and avoid a container push/pull cycle. Any solutions to that in vcluster?



