An overview of the different technologies in Linux virtualization.
The original x86 architecture is not virtualizable, because some instructions behave differently depending on whether or not the CPU is running in privileged mode, without trapping when executed unprivileged. Because a guest virtual machine does not run in privileged mode (for obvious security reasons), pure software full virtualization systems like VMware or QEMU deal with these instructions by replacing them with other instructions on the fly.
This instruction rewriting can be quite expensive. If the guest operating system kernel simply never used the unvirtualizable instructions, that overhead could be avoided. And while we're changing the guest operating system anyway, why not stop pretending to emulate real hardware, and give it lower-overhead virtual devices instead?
In a nutshell, this is paravirtualization: in order to run more efficiently, the guest operating system's kernel is replaced with one that behaves well in a virtualized environment.
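The idea can be sketched as a toy model: instead of issuing a privileged instruction that the hypervisor would have to intercept and emulate, a paravirtualized kernel makes an explicit hypercall. All names and hypercall numbers below are invented for illustration; they do not correspond to any real hypervisor's ABI.

```python
# Toy model of paravirtualization: the guest kernel calls an explicit
# hypercall interface instead of executing privileged instructions.
# All names and numbers here are invented for illustration.

HYPERCALL_UPDATE_PTE = 1   # hypothetical hypercall numbers
HYPERCALL_SET_TIMER = 2

class ToyHypervisor:
    def __init__(self):
        self.shadow_pagetable = {}

    def hypercall(self, number, *args):
        # The hypervisor validates and performs the privileged operation
        # on behalf of the guest, e.g. a pagetable update.
        if number == HYPERCALL_UPDATE_PTE:
            virt, phys = args
            self.shadow_pagetable[virt] = phys
            return 0
        return -1  # unknown hypercall

class ParavirtGuestKernel:
    """A guest kernel modified to use hypercalls instead of raw MMU writes."""
    def __init__(self, hypervisor):
        self.hv = hypervisor

    def map_page(self, virt, phys):
        # One explicit, cheap transition instead of trap-and-emulate.
        return self.hv.hypercall(HYPERCALL_UPDATE_PTE, virt, phys)

hv = ToyHypervisor()
guest = ParavirtGuestKernel(hv)
guest.map_page(0x1000, 0x2000)
```

The pagetable-update hypercall was chosen because it matches the IBM POWER example below: the guest asks the hypervisor to perform the update rather than touching the hardware itself.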
Xen, lguest and User Mode Linux do paravirtualization on x86. IBM POWER also does something along the same lines, with the hypervisor taking care of pagetable updates.
Hardware assisted virtualization
Intel VT and AMD-V capable CPUs can run all instructions in an unprivileged virtual machine and have them behave correctly. When running an unmodified operating system, many operations still trap to the hypervisor and need to be emulated, but this approach allows for a much cleaner implementation of full virtualization.
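On Linux you can see whether the CPU offers these extensions in /proc/cpuinfo: the "vmx" flag indicates Intel VT, and "svm" indicates AMD-V. A small helper to check for them:

```python
# Detect hardware virtualization support from Linux's /proc/cpuinfo:
# the "vmx" flag means Intel VT, "svm" means AMD-V.
import os

def hw_virt_support(cpuinfo_text):
    """Return which hardware virtualization extension the flags advertise."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT"
            if "svm" in flags:
                return "AMD-V"
    return None

# Usage on a real Linux system:
if os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        support = hw_virt_support(f.read())
```

Note that the flag only shows what the CPU supports; the feature can still be disabled in the BIOS.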
Xen and KVM do hardware assisted full virtualization.
Coopvirt (cooperative virtualization) is an interesting hybrid between paravirtualization and hardware assisted full virtualization. The idea is to use the hardware capabilities of Intel VT and AMD-V to do some of the virtualization that is done in software by paravirtualization, while still having a well-behaved guest that can run very efficiently in a virtualized environment.
As of late 2006, coopvirt on x86 is still in a research and prototyping phase. However IBM mainframes have been using something along the lines of coopvirt for decades.
Containers, also known as operating-system-level virtualization, do not run virtual machines at all, but simply segregate multiple user-space environments from each other, while everything runs under one kernel. Container systems tend to have low overhead and high density, but also weaker isolation between the different containers. A container system cannot run multiple different kernels, but running different Linux distributions in the different containers is fine. While possibly limiting for testing or development, containers can simplify production usage, since the shared kernel reduces the amount of software and security maintenance.
Solaris Zones, Linux-VServer and OpenVZ/Virtuozzo are examples of container systems. OpenVZ has relatively complete resource isolation between the different containers; the other two offer a bit less control. FreeBSD Jails can be seen as a precursor to containers.
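The structural difference from the other approaches can be shown with a toy model: there is exactly one kernel object, and each container is just a segregated user-space view on top of it. All class and distribution names here are invented for illustration.

```python
# Toy model of the container design: several segregated user-space
# environments share a single kernel. Names are invented for illustration.

class SharedKernel:
    version = "2.6.18"   # one kernel for everybody; cannot differ per container

class Container:
    def __init__(self, kernel, distro):
        self.kernel = kernel      # every container points at the same kernel
        self.distro = distro      # ...but can run a different distribution
        self.processes = []       # with its own segregated process table

    def spawn(self, name):
        self.processes.append(name)

kernel = SharedKernel()
container_a = Container(kernel, "Debian")
container_b = Container(kernel, "Fedora")
container_a.spawn("apache")

# container_a and container_b share one kernel, yet neither can see
# the other's processes -- low overhead, but weaker isolation.
```

This is why containers cannot run different kernels but can run different distributions: the distribution lives in user space, the kernel does not.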
Binary rewriting / JIT
QEMU and VMware both emulate a full computer without relying on trap & emulate. They achieve this by scanning the instructions the guest is about to run, checking each code page for privileged instructions and replacing those instructions with alternatives where necessary. Because the guest may inspect its own code (for example, a debugger running inside the guest), changed code pages need to be duplicated, while unchanged code pages are marked read-only. QEMU also has the ability to emulate different architectures, e.g. running an x86 virtual PC on PowerPC hardware, while VMware is x86 only. VMware has some paravirtualized device drivers available for e.g. video, to reduce the performance impact of full virtualization.
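The scan-and-rewrite step can be sketched as follows. The opcode values used are real single-byte privileged x86 instructions (hlt, cli, sti), and 0xCC is the int3 breakpoint, used here as a stand-in for however a real monitor regains control; treating each byte as one instruction is a large simplification, since real x86 instructions are variable-length.

```python
# Sketch of the scan-and-rewrite step in a binary-translating monitor.
# 0xF4/0xFA/0xFB are real privileged single-byte x86 opcodes; 0xCC (int3)
# stands in for however a real monitor re-enters itself. Decoding one
# byte per instruction is a deliberate simplification.

PRIVILEGED = {0xF4: "hlt", 0xFA: "cli", 0xFB: "sti"}
TRAP = 0xCC  # int3 -- control returns to the monitor, which emulates

def rewrite_page(page):
    """Scan a code page. If it contains privileged instructions, return a
    rewritten duplicate (the guest keeps seeing the original, so an
    in-guest debugger reads unmodified code); otherwise return the page
    unchanged, to be marked read-only and run directly."""
    if not any(b in PRIVILEGED for b in page):
        return page, False        # unchanged page: mark read-only, run as-is
    copy = bytearray(page)
    for i, b in enumerate(copy):
        if b in PRIVILEGED:
            copy[i] = TRAP        # monitor will emulate this instruction
    return bytes(copy), True      # changed page: run the duplicate

page = bytes([0x90, 0xFA, 0x90, 0xF4])   # nop, cli, nop, hlt
translated, duplicated = rewrite_page(page)
```

The read-only marking on unchanged pages serves the same bookkeeping purpose: if the guest writes to a page that was already scanned, the monitor is notified and can rescan it.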
Further reading
ParavirtBenefits: other benefits of paravirtualization.
A comparison of hypervisor-based vs. Linux-based virtualization.
IBM DeveloperWorks article: An overview of virtualization methods, architectures, and implementations.