This table compares the features and performance of the various virtualization technologies available for Linux. Hopefully this table also explains why many Linux distributions today ship Xen, even though UML, lguest and KVM are upstream.
For an explanation of the technologies, please see the technology overview page.
If you spot something that is not up to date, or think of something missing, feel free to update this page.
| technology | full virt | paravirt | containers (OS virt) | license | architectures | performance | SMP guests | CPU / memory hotplug | standalone host | notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Xen | X | X | | GPL | i686, x86-64, IA64, PPC | paravirt very fast, full virt medium | | | | full virt needs VT / AMD-V |
| KVM | X | X | | GPL | i686, x86-64, IA64, PPC, S390 | paravirt very fast, full virt medium | | | | full and para virt need VT / AMD-V, upstream |
| lguest | | X | | GPL | i686 | slow/medium | | | | upstream |
| rhype | | X | | GPL | i686, x86-64, PPC | fast | ? | | | research project |
| MoL | | | | GPL | PPC | fast | | | | 32 bit only |
| UML | | X | | GPL | i686, x86-64, PPC | slow | | | | upstream |
| | | | | GPL | i686, ARM | medium | | | | |
| qemu | X | | | GPL | i686, x86-64, IA64, PPC, ARM, MIPS, SPARC (kQEMU only i686/x86-64) | slow, medium with kQEMU | | | | |
| OpenVZ (Virtuozzo) | | | X | GPL | i686, x86-64, IA64, PPC, SPARC | native | n/a (7) | n/a (8) | | live migration |
| Linux-VServer | | | X | GPL | all where Linux goes | native | n/a (7) | n/a (7) | | poor performance isolation |
| LXC | | | X | GPL | all where Linux goes | native | n/a (7) | n/a (7) | | upstream since 2.6.29 |
| VirtualBox | X | | | GPL/proprietary | i686, x86-64 | fast/very fast | | | | kernel module GPL, RDP and USB support proprietary |
| VMware Server | X | | | proprietary | i686, x86-64 | medium/fast | | | | needs proprietary kernel modules |
| VMware Workstation/Player | X | | | proprietary | i686, x86-64 | medium/fast | | | | needs proprietary kernel modules |
| VMware ESX | X | | | proprietary | i686, x86-64 | fast/very fast | | | | |
| LPAR | | | | proprietary | s390 | native | | | | |
| z/VM | X | | | proprietary | s390 | very fast | | | | typically runs under LPAR |
| PHYP | | | | proprietary | PPC | fast | | | | used on all modern IBM System p |
| lv1 | | | | proprietary | PPC | fast | | | | used on Sony PS3 |
| BEAT | | | | proprietary | PPC | fast | ? | | | used on Toshiba CellEB |
Notes:
1. Paravirtualization is fundamentally faster than full virtualization, with the exception of the userspace implementation in UML.
2. Containers (OS-level virtualization) are faster still than paravirtualization, achieving native speed.
3. Performance can vary wildly depending on workload. This page assumes system call intensive applications, since "fair weather" performance numbers are not very useful.
4. Memory and CPU hotplug is mostly useful because it allows one to run more virtual machines on a system simultaneously, adjusting the amount of memory allocated to each guest depending on load.
5. For an overview of the other benefits of paravirtualization, see ParavirtBenefits.
6. Full virtualization performance in KVM and Xen is largely limited by the overhead of trap & emulate. Emulating multiple instructions per trap should bring it up to speed with VMware (the illustrative cost sketch at the end of this page shows the amortization).
7. Containers (OpenVZ / Virtuozzo, Linux-VServer, LXC) are not virtualization technologies per se. They carve up a single system into "super chroot" jails. All the "guest" processes in the containers run directly on the same "host" kernel and as such generally have access to the same CPU/RAM/etc. resources as the host; for example, the contained processes may be 64 bit and use multiple CPUs if the host is 64 bit and has multiple CPUs. Resource limits are generally imposed the same way as for ordinary Linux processes, for example with the "nice" command. Emulation is limited to whatever the host kernel is natively capable of: a 64-bit x86_64 kernel can execute 32-bit i386 binaries, and if the linux-abi modules are loaded it may be able to execute SCO Unix binaries, but only because the host kernel already could, not because of anything the container system does. (A minimal namespace sketch just below these notes illustrates the idea.)
8. OpenVZ (Virtuozzo) can change memory and CPU quotas at runtime; there is no real hotplug since there are no guest kernels.
9. Qemu can emulate different guest architectures, e.g. running an x86 virtual machine on a PPC host. Qemu also has the distinction of being the only full virtualization technology that can run without root privileges.
10. Parts of Qemu are used in the full virtualization implementations of Xen and KVM.
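To make note 7 more concrete, here is a minimal, hedged sketch of the kernel facility that container systems build on: a process started in its own UTS and PID namespaces via clone(2), still running directly on the host kernel. It is only an illustration of the idea, not how OpenVZ, Linux-VServer or LXC are actually implemented; real container systems add many more namespaces, chroots and resource controls on top of this.

```c
/* Minimal namespace demo: the "guest" is just a process on the host kernel.
 * Build: gcc -o nsdemo nsdemo.c
 * Run as root on a kernel with UTS/PID namespace support.
 * Illustrative sketch only, not how LXC/OpenVZ/Linux-VServer are implemented.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];   /* stack for the cloned child */

static int child(void *arg)
{
    (void)arg;
    /* Only this UTS namespace sees the new hostname; the host keeps its own. */
    if (sethostname("container", strlen("container")) != 0)
        perror("sethostname");

    struct utsname u;
    uname(&u);
    /* In its new PID namespace the child sees itself as PID 1, like init. */
    printf("inside : hostname=%s pid=%d\n", u.nodename, (int)getpid());
    return 0;
}

int main(void)
{
    /* New UTS + PID namespaces; no hypervisor and no guest kernel involved. */
    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) {
        perror("clone (needs root and namespace support)");
        return 1;
    }

    struct utsname u;
    uname(&u);
    printf("outside: hostname=%s child seen as pid=%d\n", u.nodename, (int)pid);
    waitpid(pid, NULL, 0);
    return 0;
}
```

Because the "contained" process is still scheduled by the host kernel, there is no guest kernel to give CPUs or memory to, which is exactly why the SMP guests and hotplug cells for the container rows say n/a.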
The standalone host column indicates whether or not the hypervisor (or host OS, in the case of VM) is booted before Linux. See the comparison of hypervisor-based vs. Linux-based virtualization for the debate on whether or not this is an advantage.
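As a companion to note 6, the toy model below shows why batching emulation work per trap matters: every guest exit pays a fixed world-switch cost, and handling several instructions per exit amortizes that cost. The cycle counts are invented placeholders for illustration only, not measurements of KVM, Xen or VMware.

```c
/* Back-of-the-envelope model of trap-and-emulate overhead.
 * Both constants are hypothetical placeholders, not measured values. */
#include <stdio.h>

int main(void)
{
    const double trap_cost    = 4000.0; /* assumed cycles per guest exit           */
    const double emulate_cost = 50.0;   /* assumed cycles per emulated instruction */

    /* Average cycles per emulated instruction when n instructions are
     * handled on each trap: the fixed exit cost is spread over n. */
    for (int n = 1; n <= 32; n *= 2) {
        double per_insn = trap_cost / n + emulate_cost;
        printf("%2d instructions/trap -> %6.0f cycles per instruction\n",
               n, per_insn);
    }
    return 0;
}
```

With these made-up numbers the cost drops from 4050 cycles per instruction at one instruction per trap to about 175 at 32 instructions per trap, which is the effect the note describes.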