>Maybe it's because of the time I grew up in, but in my mind the prototypical Type-I hypervisor is VMware ESX Server, and the prototypical Type-II hypervisor is VMware Workstation.
My point is that these are largely appropriated terms - neither would fit the definitions of Type 1 or Type 2 from the early days, when Popek and Goldberg were writing about them.
> Or does the thing at the bottom have to "play nice" with random other processes?
From this perspective, Xen doesn't count. You can run into all sorts of issues from the dom0 side competing for resources - you mention PV drivers later, and you can 100% see problems in VMs because of how dom0 schedules blkback and netback against other processes.
ESXi can also run plenty of unmodified Linux binaries - go back in time 15 years and it's basically a fully featured OS. There's a lot running on it, too. Meanwhile, you can build a Linux kernel with plenty of things switched off and a root filesystem with just the bare essentials for managing KVM and QEMU, and end up with something even less useful for general-purpose computing than ESXi.
>Er, both KVM and Xen try to switch to paravirtualized interfaces as fast as possible, to minimize the emulation that QEMU has to do.
There are more things being emulated than there are PV drivers for, but this is a bit outside of my point.
For KVM, the vast majority of implementations use QEMU to manage their VirtIO devices as well - https://developer.ibm.com/articles/l-virtio/ - you'll notice that IBM even discusses these paravirtual drivers directly in the context of "emulating" the device. Perhaps a better way to get the intent across here would be to say that QEMU handles the device model.
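To make "QEMU handles the device model" concrete, here's a sketch comparing a fully emulated NIC/disk with their virtio equivalents on the same QEMU command line (the guest image path, memory size, and netdev setup are placeholder assumptions):

```shell
# Fully emulated device model: QEMU implements an Intel e1000 NIC and
# an IDE controller in software; the guest runs stock e1000/IDE drivers.
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=guest.img,if=ide \
  -netdev user,id=n0 -device e1000,netdev=n0

# Paravirtual device model: the guest loads virtio drivers, but the
# backend servicing those virtqueues is still QEMU's device model.
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=guest.img,if=virtio \
  -netdev user,id=n0 -device virtio-net-pci,netdev=n0
```

Either way, the process sitting behind the device is QEMU - virtio just replaces trap-heavy register emulation with a shared-ring protocol.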
From a performance perspective, ideally you'd want to avoid PV here too and go with SR-IOV devices or PCI passthrough.
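A sketch of what that looks like with VFIO - the NIC name, VF count, and PCI address are assumptions about a particular host:

```shell
# Host side: create virtual functions on an SR-IOV capable NIC.
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# Hand one VF directly to the guest via vfio-pci; DMA and interrupts
# now go to real hardware instead of through QEMU's device model.
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=guest.img,if=virtio \
  -device vfio-pci,host=0000:03:10.0
```

The guest then runs the vendor's own driver against the VF, so there's no PV backend in the data path at all.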