The following lhype quickstart recipe was originally written by James Morris. It was turned into a wiki page so it can easily be updated if lhype changes in the future.
Note: this is becoming outdated and will soon be removed or rewritten: see the official web site.
First, you need to get a copy of the kernel sources with lhype in it: either grab the appropriate patches from Rusty's paravirt patches repository, or clone the (out of date but stable) linux-2.6-lhype git repository, then build the kernel as usual. You'll also need a root filesystem image for the guest; the one from QEMU's Linux test distribution works:

$ wget http://fabrice.bellard.free.fr/qemu/linux-test-0.5.1.tar.gz
$ tar -xzf linux-test-0.5.1.tar.gz linux-test/linux.img
Then load the lhype module and launch a guest:

$ sudo /sbin/modprobe lhype
$ sudo linux-2.6-lhype/drivers/lhype/lhype_add 32M 1 linux-2.6-lhype/vmlinux \
      linux-test/linux.img netfile root=/dev/lhba

If all goes well, the guest boots straight to a shell:

Linux version 2.6.19-rc5-mm2-gf808425d (email@example.com) (gcc version 4.1.1 20060928 (Red Hat 4.1.1-28)) #1 PREEMPT Tue Nov 28 00:53:39 EST 2006
QEMU Linux test distribution (based on Redhat 9)
Type 'exit' to halt the system
sh: no job control in this shell
sh-2.05b#
'lhype_add' is an application included with the kernel sources which launches and monitors guest domains. It's actually a simple ELF loader: it maps the guest kernel image into the host's memory, then opens /proc/lhype and writes some configuration info about the guest. This kicks the hypervisor into action to initialize and launch the guest, while the open /proc/lhype file descriptor is used for control, console I/O, and DMA-like I/O via shared memory (using ideas from Rusty's earlier XenShare work). The hypervisor is simply a loadable kernel module. Cool stuff.
It's a little different to Xen, in that the host domain (dom0) is simply a normal kernel running in ring 0 with userspace in ring 3. The hypervisor is a small ELF object loaded into the top of memory (when the lhype module is loaded), which contains some simple domain switching code, interrupt handlers, a few low-level objects which need to be virtualized, and finally an array of structs to maintain information for each guest domain (drivers/lhype/hypervisor.S).
The hypervisor runs in ring 0, with the guest domains running as host domain tasks in ring 1, trapping into the hypervisor for virtualized operations via paravirt ops hooks (arch/i386/kernel/lhype.c) and subsequent hypercalls (drivers/lhype/hypercalls.c). Thus, the hypervisor and host kernel run in the same ring, rather than, say, the hypervisor in ring 0 with the host kernel in ring 1, as is the case with Xen. The advantage for lhype is simplicity: the hypervisor can be kept extremely small and simple, because it only needs to handle tasks directly related to virtualization. It's just 463 lines of assembler, with comments. Of course, from an isolation point of view the host kernel is effectively part of the hypervisor, because they share the same hardware privilege level. It has also been noted that in practice, a typical dom0 has so much privileged access to the hypervisor that it's not necessarily meaningful to run them in separate rings. Probably a good beer @ OLS discussion topic.
Note that Rusty will be giving a presumably canonical talk on lhype at LCA 2007.