
Nitpick: it’s not accurate to say that a hypervisor, by definition, runs right on the hardware. Xen (as a type-1 hypervisor) has this property; KVM (as a type-2 hypervisor) does not. It’s important to remember that the single core responsibility of a hypervisor is to divide hardware resources and time between VMs, and this decision-making doesn’t require bare-metal.

For those unfamiliar, the informal distinction between type-1 and type-2 is that type-1 hypervisors are in direct control of the allocation of all resources of the physical computer, while type-2 hypervisors operate as some combination of being “part of” / “running on” a host operating system, which owns and allocates the resources. KVM (for example) gives privileged directions to the Linux kernel and its virtualization kernel module for how to manage VMs, and the kernel then schedules and allocates the appropriate system resources. Yes, the type-2 hypervisor needs kernel-mode primitives for managing VMs, and the kernel runs right on the hardware, but those primitives aren’t making management decisions for the division of hardware resources and time between VMs. The type-2 hypervisor is making those decisions, and the hypervisor is scheduled by the OS like any other user-mode process.




Type-1 and type-2 hypervisor is terminology that should at this point be relegated to the past.

It was never popularly used in a way accurate to the origin of the classification - the original paper by Popek and Goldberg talked about formal proofs for the two types, and those definitions have very little to do with how the terms began being used in the 90s and 00s. Computers have changed a lot since the 70s, when the paper was written and the terminology was coined.

So, language evolves, and Type-1 and Type-2 came to mean something else in common usage. That might have made sense to differentiate something like ESX from VMware Workstation in their capabilities, but it has lost that utility when trying to differentiate Xen from KVM for the overwhelming majority of use cases.

Why would I say it's useless in trying to differentiate, say, Xen and KVM? Couple of reasons:

1) There's no performance benefit to type-1 - a lot of performance sits on the device emulation side, and both are going to default to qemu there. Other parts are based heavily on CPU extensions, and Xen and KVM have equal access there. Both can pass through hardware, support sr-iov, etc., as well.

2) There's no overhead benefit in Xen - you still need a dom0 VM, which is going to arguably be even more overhead than a stripped down KVM setup. There's been work on dom0less Xen, but it's frankly in a rough state and the related drawbacks make it challenging to use in a production environment.

Neither term provides any real advantage or benefit in reasoning between modern hypervisors.


> Type-1 and type-2 hypervisor is terminology that should at this point be relegated to the past.

Maybe it's because of the time I grew up in, but in my mind the prototypical Type-I hypervisor is VMWare ESX Server; and the prototypical Type-II hypervisor is VMWare Workstation.

It should be noted that VMWare Workstation always required a kernel module (either on Windows or Linux) to run; so the core "hypervisor-y" bit runs in kernel mode either way. So what's the difference?

The key difference between those two, to me is: Is the thing at the bottom designed exclusively to run VMs, such that every other factor gives way? Or does the thing at the bottom have to "play nice" with random other processes?

The scheduler for ESX Server is written explicitly to schedule VMs. The scheduler for Workstation is the Windows scheduler. Under ESX, your VMs are the star of the show; under Workstation, your VMs are competing with the random updater from the printer driver.

Xen is like ESX Server: VMs are the star of the show. KVM is like Workstation: VMs are "just" processes, competing with whatever random bash script was launched at startup.

KVM gets loads of benefits from being in Linux; like, it had hypervisor swap from day one, and as soon as anyone implements something new (like say, NUMA balancing) for Linux, KVM gets it "for free". But it's not really for free, because the cost is that KVM has to make accommodations to all the other use cases out there.

> There's no performance benefit to type-1 - a lot of performance sits on the device emulation side, and both are going to default to qemu there.

Er, both KVM and Xen try to switch to paravirtualized interfaces as fast as possible, to minimize the emulation that QEMU has to do.


> Maybe it's because of the time I grew up in, but in my mind the prototypical Type-I hypervisor is VMWare ESX Server; and the prototypical Type-II hypervisor is VMWare Workstation.

My point is that these are largely appropriated terms - neither would fit the definitions of type 1 or type 2 from the early days when Popek and Goldberg were writing about them.

> Or does the thing at the bottom have to "play nice" with random other processes?

From this perspective, Xen doesn't count. You can have all sorts of issues on the dom0 side from competing for resources - you mention PV drivers later, and you can 100% run into issues with VMs because of how dom0 schedules blkback and netback when they're competing with other processes.

ESXi can also run plenty of unmodified Linux binaries - go back in time 15 years and it's basically a fully featured OS. There's a lot running on it, too. Meanwhile, you can build a Linux kernel with plenty of things switched off and a root filesystem with just the bare essentials for managing KVM and QEMU that is even less useful for general-purpose computing than ESXi.

> Er, both KVM and Xen try to switch to paravirtualized interfaces as fast as possible, to minimize the emulation that QEMU has to do.

There are more things being emulated than there are PV drivers for, but this is a bit outside of my point.

For KVM, the vast majority of implementations are using qemu for managing their VirtIO devices as well - https://developer.ibm.com/articles/l-virtio/ - you'll notice that IBM even discusses these paravirtual drivers directly in context of "emulating" the device. Perhaps a better way to get the intent across here would be saying qemu handles the device model.

From a performance perspective, ideally you'd want to avoid PV here too and go with sr-iov devices or passthrough.
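For illustration, here is roughly what those two choices look like on a QEMU command line. The flags are real QEMU options, but the disk image path and the PCI address are placeholders. With virtio, the guest sees paravirtual interfaces, yet the QEMU process still provides the device model behind them; with vfio-pci, a real PCI function (e.g. an SR-IOV VF) is handed to the guest and QEMU drops out of the data path.

```shell
# Paravirtual (virtio) devices: PV interfaces in the guest, but the
# QEMU process still implements the device model serving the rings.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=disk.img,if=virtio \
    -device virtio-net-pci,netdev=n0 -netdev user,id=n0

# Passthrough: give the guest direct access to a PCI function
# (placeholder address), bypassing QEMU's device emulation entirely.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=disk.img,if=virtio \
    -device vfio-pci,host=0000:03:00.0
```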


According to the actual paper that introduced the distinction, and adjusting for change of terminology in the last 50 years, a type-1 hypervisor runs in kernel space and a type-2 hypervisor runs in user space. x86 is not virtualizable by a type-2 hypervisor, except by software emulation of the processor.

What actually can change is the amount of work that the kernel-mode hypervisor leaves to a less privileged (user space) component.

For more detail see https://www.spinics.net/lists/kvm/msg150882.html



There are arguments in both directions for something like KVM. Wikipedia states it pretty well:

> The distinction between these two types is not always clear. For instance, KVM and bhyve are kernel modules[6] that effectively convert the host operating system to a type-1 hypervisor.[7] At the same time, since Linux distributions and FreeBSD are still general-purpose operating systems, with applications competing with each other for VM resources, KVM and bhyve can also be categorized as type-2 hypervisors.[8]

https://en.wikipedia.org/wiki/Hypervisor#Classification


Not really, calling KVM a type-1 is a misunderstanding of what the “bare-metal” distinction is referring to. The real difference between the two types is whether the hypervisor owns the hardware or not. In the case of a type-1, the hypervisor runs below the kernel and controls access to the hardware, even for the kernel. In type-2, the hypervisor runs on the kernel, which owns the hardware, and must go through the kernel to use hardware resources.


But that's not how that works. KVM is as "bare-metal" in access to the system as ESXi is. The hypervisor code runs in ring 0 in both cases.



