Can anybody tell me what's the matter with Xen? I believe that a few years ago there was a lot of talk about Xen. Then it all seemed to die out: the kernel developers seemed to develop KVM instead because it's simpler than Xen, so the Xen developers had to keep re-releasing their patches for every kernel release. What's the current status of Xen? How widely is it used, and how widely is KVM used? And why is Xen being integrated into Linux now, even though KVM already exists?
Xen is a microkernel, loosely based on the Nemesis OS from the University of Cambridge. Instead of running traditional POSIX software, this microkernel runs software in ring 1 using a very special system call API. DomU and Dom0 support for Linux means that Linux has been ported to run on top of this special system call API.
But this doesn't mean Xen is now in Linux. Xen is still a separate OS with its own memory manager, scheduler, device drivers, etc.
OTOH, KVM is a device driver for Linux that lets you run virtual machines under Linux. You're still using the Linux scheduler, memory manager, device drivers, etc. KVM is fundamentally a part of Linux.
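To make the "device driver" point concrete, here's a minimal sketch (mine, not from the original comment) of how userspace talks to KVM: it's just ioctls against a character device, issued from an ordinary Linux process. The ioctl numbers below are the ones defined in <linux/kvm.h>; it assumes a host that actually exposes /dev/kvm and a user with permission to open it.

    # KVM as a plain Linux char device driven by ioctls (Python sketch)
    import fcntl, os

    KVM_GET_API_VERSION = 0xAE00   # _IO(0xAE, 0x00) from <linux/kvm.h>
    KVM_CREATE_VM       = 0xAE01   # _IO(0xAE, 0x01)

    kvm = os.open("/dev/kvm", os.O_RDWR)
    print("KVM API version:", fcntl.ioctl(kvm, KVM_GET_API_VERSION, 0))  # 12 on current kernels

    vm = fcntl.ioctl(kvm, KVM_CREATE_VM, 0)   # returns a new fd representing the VM
    print("created VM, fd =", vm)
    # A real VMM (QEMU, for instance) would now mmap guest memory, create
    # vcpus with KVM_CREATE_VCPU, and loop on KVM_RUN -- all ordinary syscalls,
    # scheduled and memory-managed by the regular Linux kernel.

    os.close(vm)
    os.close(kvm)

Everything above runs as a normal process, which is exactly why KVM guests are handled by the stock Linux scheduler and memory manager.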
For another point of reference (with less confusing terminology), L4 is pretty much the same thing as Xen and L4Linux is equivalent to Linux dom0/domU support.
VirtualBox is not as efficient as a system-level solution like Xen or KVM.
I.e., VBox is an application that runs at the same level as, say, Apache or nginx, as opposed to KVM and Xen, which work at a lower level by integrating closely with the Linux kernel.
That means the resources allocated to VBox are decided upon by the host OS, with all the overhead that comes along with that.
Xen and KVM can run a guest OS with less overhead than VBox, and they can decide that one guest OS gets 25% of the CPU cycles while another gets only 10%, which VBox cannot do.
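As a rough illustration of what that kind of weighting looks like in practice (my own sketch, not from the thread): the libvirt Python bindings let you set relative CPU shares on a guest. The connection URI, guest name and values below are made up, and the parameter names differ between drivers (the Xen credit scheduler uses "weight"/"cap", the KVM/QEMU driver uses "cpu_shares").

    # Sketch: give one guest a bigger relative share of CPU time via libvirt
    import libvirt

    conn = libvirt.open("xen:///system")      # hypothetical URI; "qemu:///system" for KVM
    dom = conn.lookupByName("guest1")         # hypothetical guest name

    print(dom.schedulerType())                # e.g. ['credit', 2] on a Xen host
    # weight is a relative share, cap is a hard limit in percent (Xen credit scheduler)
    dom.setSchedulerParameters({"weight": 512, "cap": 25})

    conn.close()

The point of the comment above is that with Xen or KVM the hypervisor's own scheduler enforces these shares, rather than the guest competing as just another host process.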
VirtualBox works fine for development environments, but you want something like Xen or KVM for running your servers if not going bare metal.
VirtualBox is basically the same as KVM, with the exceptions that 1) their kernel modules are not upstream, 2) they put a lot of effort into doing dynamic translation for non-VT/SVM hardware, 3) their performance is really poor, and 4) they don't integrate very well with Linux, mainly because of (1).
As far as being a hypervisor goes, however, VBox is strictly type 2, i.e. it needs a host OS to run on, and resources are allocated to it by that OS.
In contrast, Xen and KVM are type 1 hypervisors, meaning they themselves allocate resources for guest OSes from bare metal.
There are debates about the semantics of whether KVM is type 1 or type 2 due to its implementation on top of the Linux kernel, but it is definitely in a different league than Vbox.
The problem is that the phrase used in Popek's paper is "Conventional OS". This was the 1970s. What's now considered a "Conventional OS" was not conventional back then.
For instance, everyone would agree Linux is a conventional OS but Linux provides a kernel module (KVM) that allows for virtualization driven by userspace.
A few years ago the Xen developers sent a massive patch to the Linux kernel developers, and it got rejected outright because it reached far deeper into the kernel internals than it should have.
Things seem to have changed now, though -- it looks like the Xen developers have cleaned up their code and the Linux devs were willing to merge it into mainline.
Xen is still widely used. IIRC Xen provides the virtualization layer for AWS, and it is used by some pretty large hosting providers (Linode comes to mind). It is also packaged into a number of commercial offerings: Oracle's VM solution is really just Xen running on Red Hat with some optimizations for their platform stack, and the same goes for Citrix. Clearly those two implementations alone are going to be a decently-sized install base.
I don't know how much KVM is used in the wild, but it has been crowned the "official" hypervisor for RHEL and Ubuntu, so I would guess that it's been steadily gaining steam with the OSS crowd.
Indeed -- and stuck with it for quite some time. They trialled Xen for a while, but I don't think they ever deployed it on a terribly large scale. Certainly, my VM went straight from UML to KVM.
I believe the management tools for Xen (or lack thereof) were the reason for using KVM instead. None of my machines (I have 7 or 8) have ever used Xen.
Xen is used all over; AWS and a multitude of Linux VPS hosts are some of the major users. I don't know how much KVM is used in comparison, but I believe that Xen's paravirtualization offers some advantages over using KVM (someone with more knowledge could probably chime in on that).
As mentioned by others, Xen is used a lot in the wild (I believe it is the underlying virtualization architecture on EC2, for example).
One big advantage of KVM is that you can use your existing kernel, and there is little chance of conflict with other features in the kernel (which can include somewhat esoteric drivers, more fundamental patches like low-latency stuff, new schedulers, etc.). At my current job, for example, we had issues with RAID drivers that did not work with the Xen kernel (but that may be our own incompetence).
When KVM first came out, it required support from the CPU, whereas Xen did not. KVM still requires this CPU support, but it is very hard to buy an x86 CPU without the feature nowadays, whereas it was not so common 3-4 years ago. Anything you would use in a server certainly has the support.
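For what it's worth, a quick way to check for that CPU support on a Linux host (my own sketch, not part of the original answer) is to look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo:

    # Sketch: does this host's CPU advertise hardware virtualization?
    def has_hw_virt(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
        return False

    if __name__ == "__main__":
        print("hardware virtualization available:", has_hw_virt())

(Even with the flag present, the feature may still be disabled in the firmware/BIOS, so actually loading the kvm module is the definitive test.)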
KVM is simpler to set up on a common distribution (Ubuntu, RHEL, Fedora), which means it is easier to reproduce the same environment as used in production.
Xen solved a problem before 2007: no CPUs at that time had hardware virtualization extensions, and Xen allowed you to virtualize Linux-on-Linux without needing them.
However, the Xen developers never worked very hard on getting the code into the mainstream kernel, so literally for two or three years if you wanted Xen you had to run some ancient patched Linux kernel (2.6.18 IIRC).
From around 2007, CPUs started to be shipped that contained hardware virt. KVM was written about that time, and because it just defers virtualization to the hardware, the KVM code is small and simple, more akin to a device driver than anything else. Putting this in Linux was a no-brainer really.
Now, 2011, third generation hardware virt is everywhere, and it's really fast, much faster in fact than the original Xen "paravirtualization" trickery.
The Xen developers realized late in the game that they needed to get their changes upstream into Linux. The changes were late and very invasive, but now after ~3 years of trying, all the pieces are finally in upstream Linux, and that's the news you're reading about today.
Hats off to the Xen developers like Jeremy Fitzhardinge and others who persevered despite what was quite stiff resistance from the Linux community initially! Just to clarify, the major stumbling block was Dom0 - Xen's privileged domain, which has normally been a fairly heavily customized Linux kernel. A number of the DomU/guest components had already been in mainline for a while. Then the core Dom0 components went in around 2.6.37. And now all the PV backend drivers etc. are in as well, paving the way for a vanilla Linux kernel to be a fully functional dom0 under Xen.
Having watched the back-and-forth between the Xen and Linux community over this inclusion business a bit, I feel the delay has actually been worth it in hindsight. Linux's pvops abstraction is clean and impartial. With this Linux has support for at least 3 virtualization technologies out of the box: KVM, Xen, and lguest. Also, a lot of the jagged edges in the Xen dom0 patches have been smoothed in this process.
This is also great news for private and public Xen-based deployments, as well as companies such as Oracle, SUSE and others that built their own virtualization platforms on top of Xen. Trying to get supported, well-tested drivers for new hardware working with Xen has often been a major pain point and required significant engineering resources. This was a major reason why Ubuntu switched to KVM officially (Red Hat led the charge here, but that was mostly because they bought Qumranet - the company that built KVM). With this, Xen is significantly more maintainable, and I expect to see at least some of the distros re-include Xen as a virtualization option fairly soon!
Red Hat used to not care about Xen, and now they still don't care. But as long as they don't deliberately break it, I guess RHEL 7 will inherit Xen support for free from upstream Linux.
Even in RHEL 6 we have support for RHEL as a Xen guest.
However as aliguori says above, "Xen in Linux" is a misnomer: Xen is a separate and completely different kernel (the "Xen hypervisor") and it seems unlikely we'll be shipping that any time soon. No one gets rich by providing paid support for two completely different and incompatible kernels.
XenServer and the xen.org stuff are very similar from the perspective of a guest. I mean, administering them is a little different, but not that different. There is rather a lot of shared code between the two systems; xen.org generally gets newer features first, and XenServer has for-pay support.
Recently (well, I guess not that recently) the 'xl' command line from XenServer was given back to xen.org. It's a lot nicer than 'xm', I think.