Virtlet: run VMs as Kubernetes pods (mirantis.com)
179 points by ivan4th on May 7, 2018 | 23 comments



The real reason for these projects (Virtlet, KubeVirt, RancherVM) is that OpenStack is too damn hard. Even with all this focus on containers these days, people still need an on-prem VM solution. But the requirements are really pretty simple for most use cases, and the cost and complexity of OpenStack is not justified.

Edit: disclaimer: my company does RancherVM.


The reason for this project is that people want to migrate to containers as the industry pushes them in that direction, but there will always be cases where "normal" containers are out of scope:

a) you depend on a piece that's based on an operating system other than Linux;

b) you need a specific kernel module which for some reason should not be loaded on the host kernel;

c) you need hardware separation through virtualization, for security reasons.

All of that can be achieved "somewhere around" OpenStack, but what about integration and a common interface with the other parts of the stack? That's the reason we built Virtlet: to have the same interface for "normal" (Docker image based) pods and VMs (hard disk image based), with the same kubectl commands, the same API (so it can be used for Deployments, DaemonSets, StatefulSets and so on), and the same thing configuring networking in the whole cluster (CNI, hopefully any of your choice; if it's not working, please file a bug on GH).
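For a concrete feel, a Virtlet VM pod looks roughly like this. A minimal sketch based on the project's examples; the target-runtime annotation, the virtlet.cloud image prefix, and the extraRuntime node label are Virtlet conventions and may differ between versions:

    # Sketch of a Virtlet VM defined as an ordinary pod.
    # Annotation/image naming follows Virtlet's examples; may vary by version.
    apiVersion: v1
    kind: Pod
    metadata:
      name: cirros-vm
      annotations:
        kubernetes.io/target-runtime: virtlet.cloud  # route this pod to the Virtlet runtime
    spec:
      nodeSelector:
        extraRuntime: virtlet                        # only schedule onto nodes running Virtlet
      containers:
      - name: cirros-vm
        image: virtlet.cloud/cirros                  # "image" here is a VM disk image reference

Everything else (scheduling, CNI networking, kubectl) works as for any other pod.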


I don't fully buy that explanation. If OpenStack were better, people would already have had success on that platform and would be asking for a bridge between k8s and OpenStack (OpenStack could easily be a virtual kubelet). Instead, people are looking to abandon OpenStack in favor of something hopefully better.


I'm not saying that OpenStack is better or worse; it was simply outside our goals. For sure OpenStack has better tenancy support and a much more mature approach to networking and storage (in the containers world we are still working on https://github.com/container-storage-interface/spec ), but at the same time, we only needed a qcow2-based VM running alongside already deployed pods, with the same user interface. That's all, without the rest of the story around it.


Lars from Fuga Cloud here.

We run a public cloud largely based on OpenStack components. While OpenStack can be daunting, using a configuration manager like Ansible makes any kind of deployment a lot simpler and more reusable. We use Ansible internally to deploy Kubernetes, which in turn deploys new versions of OpenStack.

Here's a tutorial[0] on how to use Ansible in combination with OpenStack.

[0]: https://fuga.cloud/academy/tutorials/deploying-owncloud-on-f...


Interesting to see how this really compares to KubeVirt, which seems to be doing the same thing. As far as I understand, KubeVirt isn't "just" for pets. [disclaimer: I've been very peripherally involved with KubeVirt because they asked me about integrating virt-v2v support]


You can't have a StatefulSet of VMs on KubeVirt, although you may want one if you're using unikernels or want to run a nested Kubernetes cluster for testing, like here: https://github.com/Mirantis/virtlet/blob/master/examples/k8s... On the other hand, you can't have VM migrations with Virtlet.
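To make the StatefulSet point concrete, here's a hypothetical sketch; the annotation and node label follow the Virtlet pod conventions and may vary by version:

    # Hypothetical sketch: a StatefulSet whose pod template describes Virtlet VMs,
    # giving each VM a stable identity (vm-set-0, vm-set-1, ...).
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: vm-set
    spec:
      serviceName: vm-set
      replicas: 3
      selector:
        matchLabels:
          app: vm-set
      template:
        metadata:
          labels:
            app: vm-set
          annotations:
            kubernetes.io/target-runtime: virtlet.cloud
        spec:
          nodeSelector:
            extraRuntime: virtlet
          containers:
          - name: vm
            image: virtlet.cloud/cirros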


Yes, right now we're focusing more on the 1:1 case. Our OfflineVirtualMachine is basically a StatefulSet with replicas: 1. For stateless scaling we have a VirtualMachineReplicaSet. A StatefulVirtualMachineSet will probably be added to KubeVirt soon.


Be nice if there was just one "virt for Kube" technology. I guess virt-v2v will end up having to support both.


Given that the container ecosystem is a huge land rush with many obviously missing pieces, there are going to be multiple projects for every piece.


It seems that KubeVirt is implemented as an operator and registers new resource types, which is an approach I like.

I'd hate for someone to run kubectl delete pod --all thinking they're only deleting stateless apps, and end up getting rid of VMs too.


As an aside - I saw you a few years ago in Gloucester UK and you taught me loads about virtualisation. Thank you Richard.


I'm using KubeVirt for this. My use case is to allow preconfigured Windows VMs to boot up inside Kubernetes, so that both Windows VMs and Linux containers are manipulated using the same API. It works very well!


The key is that with Virtlet you can use your Windows VM like other pods in ReplicaSets, DaemonSets, and StatefulSets, which cannot be done using the CRDs offered by KubeVirt. At the same time, assuming your Windows image supports cloud-init (e.g. using http://cloudbase-init.readthedocs.io ), you can set it up during boot, configuring users and passwords and running arbitrary scripts; you have full control over what happens during the cloud-init phase using only pod annotations (you don't need a specially prepared image with predefined content, as in KubeVirt's case). With Virtlet you can also merge data from different sources (ConfigMaps, Secrets, annotations) and use the result in the output cloud-init image.
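A hedged sketch of what that annotation-driven cloud-init setup looks like; the Virtlet annotation names here are recalled from the docs and may differ between versions, and the key, user data, and image URL are placeholders:

    # Sketch: cloud-init data supplied entirely through pod annotations.
    apiVersion: v1
    kind: Pod
    metadata:
      name: windows-vm
      annotations:
        kubernetes.io/target-runtime: virtlet.cloud
        VirtletSSHKeys: |
          ssh-rsa AAAA-placeholder-key user@example.com
        VirtletCloudInitUserData: |
          users:
          - name: admin
            # password / ssh settings would go here
          runcmd:
          - echo "configured at boot"
    spec:
      nodeSelector:
        extraRuntime: virtlet
      containers:
      - name: windows-vm
        image: virtlet.cloud/example.com/win2016.qcow2  # hypothetical disk image URL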

In the end, KubeVirt provides a way to run a VM as a custom resource managed by an app on k8s (virt-launcher), while Virtlet provides a way to run a VM as a first-class cluster citizen, as a pod, with the same interfaces as other pods (so you can use kubectl logs and kubectl attach -it, just as with other "normal" (Docker image based) pods).

OK, with a Windows VM you will only see cloud-init logs in kubectl logs, as that's the only part that uses the serial console to output anything :P

With other types of VM images (based on Linux, BSD, or unikernels) you'd probably appreciate the ability to use kubectl attach -it or kubectl logs a lot more ;)


BTW, both solutions have their strong points: Virtlet is a bit closer to the k8s interfaces, while KubeVirt stays closer to libvirt's options (e.g. take a look at https://twitter.com/dummdida/status/992037913352392705 )


Regarding CRDs and core types like StatefulSets: we could reuse them with CRDs too. We explicitly decided against it, to give the user and the internal components the chance to work with proper abstractions. For instance, we have a real dedicated REST API, fully integrated into k8s, including websocket endpoints for console, VNC, ...: https://www.kubevirt.io/api-reference/master/operations.html. Regarding networking integration and cluster administration, our VMs are transparent to the cluster to ensure seamless integration at the pod level.

We don't have every controller type available in KubeVirt, but we do have, for instance, a VirtualMachineReplicaSet and an OfflineVirtualMachine (soon to be renamed StatefulVirtualMachine): https://www.kubevirt.io/user-guide/#/workloads/controllers/R.... A VirtualMachineStatefulSet will definitely be added. Others will be added on demand.

Regarding disks: I'm not sure why you think special configuration of the images is necessary to run a VM on KubeVirt. Maybe you can explain that in more detail. One of the core use cases is that you simply start your already existing VM without modifying the images. We support different volume types, including a RegistryDisk. See https://www.kubevirt.io/user-guide/#/workloads/virtual-machi... for further details.

And of course we support cloud-init: https://www.kubevirt.io/user-guide/#/workloads/virtual-machi....

It is true that we do not integrate as deeply with kubectl as Virtlet does. We have our own virtctl tool which provides virtualization-related commands (e.g. "virtctl console", "virtctl vnc", ...). Some of them replace missing kubectl pieces; some add virtualization-specific commands. Having the deep integration is indeed nice. I just want to add that depending solely on kubectl is a double-edged sword: for some VMs things work, for others they don't. There is a good chance that people would have to modify their images to properly integrate with kubectl in the mentioned ways. That is exactly what we did not want.
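For example (a sketch; the subcommand and resource names follow KubeVirt's docs of the time and may have changed since):

    virtctl console testvm              # attach to the serial console of the VM "testvm"
    virtctl vnc testvm                  # open a VNC connection to the same VM
    kubectl get offlinevirtualmachines  # the VM objects stay visible via kubectl as CRDs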

Personally I think that there are differences between pods and virtual machines (arguably varying depending on what you do with your workloads). Integrating where things are equal and extending otherwise is a good thing. It allows very natural interactions.

I would be careful with saying that one or the other project allows running vms more as "first class citizen" in the cluster than the other. We could probably argue about that forever.


Yes, you can use KubeVirt for Windows. In some cases, it may be even better than Virtlet for that purpose. But if you want a real Kubernetes Deployment / StatefulSet / DaemonSet / etc. built out of VMs (examples being unikernels, a nested Kubernetes cluster for testing, etc.), Virtlet may be a better choice.


True. Today it does not fit directly into existing workload controllers (Deployment, ReplicaSet, etc.), but SIG API Machinery has actually seen that others would also like CRD support in the templates of controllers.

If this lands, then KubeVirt would get that support "for free" (tm)


Regarding the nested cluster: one of the main reasons for having the VirtualMachineReplicaSet in KubeVirt is to allow that use case. They work pretty nicely in combination with cloud-init and e.g. kubeadm to dynamically scale the cluster.
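A hedged sketch of what that might look like; the field names follow the v1alpha1 API of that era and may not match current KubeVirt, and the image and labels are placeholders:

    # Sketch: a VirtualMachineReplicaSet scaling identical VMs that
    # cloud-init + kubeadm can join to a nested cluster on boot.
    apiVersion: kubevirt.io/v1alpha1
    kind: VirtualMachineReplicaSet
    metadata:
      name: nested-nodes
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nested-nodes
      template:
        metadata:
          labels:
            app: nested-nodes
        spec:
          domain:
            devices:
              disks:
              - name: boot
                volumeName: boot
          volumes:
          - name: boot
            registryDisk:
              image: kubevirt/fedora-cloud-registry-disk-demo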


What's the key difference between Virtlet/KubeVirt and just running VMs alongside your Kubernetes pods? Is the main goal to centralize management? That seems reasonable, I'm just wondering.

Also, how does a Kubernetes-managed VM compare to a plain VM from, say, AWS EC2? I imagine it's a little less efficient, since the situation I'm imagining involves VMs running Kubernetes pods that in turn run VMs, but I may have this all wrong.


Besides using kubectl to manage VMs, you also get VMs that join the cluster network as first-class citizens. In the blog post you can see a k8s Service pointing to a VM, and k8s services being accessed from within the VM. Also, compared to EC2 / GCE / etc., you can use Virtlet on bare-metal on-prem clusters, too.
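Since a Virtlet VM is just a pod, the Service side is plain Kubernetes; for instance (the labels here are made up for illustration):

    # A regular Service fronting a VM pod, exactly as for containers.
    apiVersion: v1
    kind: Service
    metadata:
      name: vm-ssh
    spec:
      selector:
        app: my-vm      # label carried by the VM pod
      ports:
      - port: 22        # expose the VM's SSH port inside the cluster
        targetPort: 22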


That makes sense, thanks.


Reminds me of the "vmlets" term from the Virtual Virtual Machines project: https://pages.lip6.fr/vvm/publications/0008SBAC.pdf



