Why should I care about virtualization?
OK, so virtualization is the latest hype. Why should I care? Computers are getting cheaper every day...
This is a fair and often-heard question.
On the other side, you'll find virtualization enthusiasts who run four virtual machines on their desktop and feel about virtualization the way they once felt about the color monitor or the sound card: they can never go back to computing without it, but they can't quite explain why anybody else should care.
This article aims to describe why some people want virtualization, and why some other people absolutely need virtualization.
The most common use for virtualization is consolidation: combining multiple workloads on one physical computer. This lets people run many virtual machines on far fewer physical computers.
But computers are cheap...
Well, computers may be cheap, but if you have enough of them the cost sure adds up.
A typical (full) data center
Let's take a look at a typical data center today. The data center is full: there is literally no more space to add extra computers, and the power and air conditioning are near their limits, too. However, since dual-core CPUs and 1GB RAM sticks are so cheap, the vast majority of the servers run at only 10-20% of their capacity.
In short, the data center is full, but the servers are empty.
If the IT department wants to run more server workloads, will they:
- build a new data center, or
- use the capacity inside the existing servers?
As long as there is free space available inside a data center, adding a few computers is the easy way to go. However, once the data center is full, IT management will have a hard time convincing the higher-ups that an entire new data center should be built.
To make matters worse, there might not even be enough electricity available nearby. It's not like you can just plug a 5MW data center into the power grid, and you do not want to have to train an entire new staff in another state!
Virtualization can offer a relatively easy way out. When the data center fills up, you can start actually filling up the servers.
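To put a rough number on the consolidation win, here is a back-of-the-envelope sketch. The 10-20% utilization figures come from the scenario above; the 70% target utilization for a consolidated host is an assumption, chosen to leave headroom for load spikes:

```python
def consolidation_ratio(avg_util_pct, target_util_pct=70):
    """How many lightly loaded workloads can share one physical host,
    filling the host to target_util_pct percent on average.
    Integer percentages are used to avoid floating-point surprises."""
    return target_util_pct // avg_util_pct

# Servers idling at 10% utilization, consolidated host filled to 70%:
print(consolidation_ratio(10))  # 7 workloads per physical server
print(consolidation_ratio(20))  # 3 workloads per physical server
```

Even at the conservative end of that range, two thirds of the physical servers could be retired.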
Hardware upgrades
Hardware is getting faster every month. However, moving a workload to a new server traditionally means installing an OS on the new server, configuring it to run the application, copying over the application data, and hoping everything still works.
Virtual machines do not have this issue, since they do not touch the physical hardware directly. You install the host OS onto your new server, then copy over the virtual machine in its entirety. There is no need to reconfigure the OS that runs your applications, since it lives inside the virtual machine.
Legacy operating systems
The problem gets a lot worse when dealing with an older operating system. Yes, the one that runs that critical database. With a bit of bad luck, that older OS will not even boot on quad-core CPUs. Doh! Look around in any data center and you'll find a critical application that is tied down to old hardware because it runs on a legacy operating system.
Wouldn't it be nice if you could magically run that old operating system on new hardware?
With full virtualization (VMware's binary translation, or Xen or KVM on Intel VT or AMD-V capable CPUs) you can. It works because the virtualization layer emulates simple, well-known hardware, so your octo-core CPU will look like an older 8-CPU system, only with faster CPUs. Multi-core CPUs, ACPI device discovery and interrupt routing, and support for 10GigE or SATA are no longer a problem for the guest.
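As a quick aside: on Linux, the CPU advertises Intel VT as the vmx flag and AMD-V as the svm flag in /proc/cpuinfo. A minimal sketch of that check (the function name is mine):

```python
def has_hw_virt(cpuinfo_text):
    """Return True if the CPU flags advertise Intel VT (vmx) or AMD-V (svm)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

# On a real Linux system:
# with open("/proc/cpuinfo") as f:
#     print(has_hw_virt(f.read()))
```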
Yes, virtualization can have significant overhead. However, because the virtualization software emulates simple hardware, it may help you run legacy OSes on way faster hardware than anything that OS could boot on natively. It may help with the power bill, too...
Test servers
Whether you are a student doing software development or the CIO of a major bank, you will have a shortage of test hardware.
Virtualization allows you to create low priority virtual machines for testing. Test out that new Fedora Rawhide or Debian Unstable on a virtual machine, before it breaks your desktop. Give your developers a bunch of virtual test machines each, instead of having them wait for each other to finish using the test systems.
Virtual machine migration
With virtual machine migration (like VMware VMotion, or Xen live migration) you can move a virtual machine from one physical system to another while it is running. Believe it or not, this is useful for more things than impressing your friends...
There are a number of situations in which you will want to migrate virtual machines to other physical machines:
- Hardware failure. Say a CPU fan breaks down and the CPU throttles itself, running at a glacial speed. You move the virtual machines onto healthy systems and fix the hardware, without application downtime.
- Load balancing. The virtual web server of one of your customers just got slashdotted. Move some of the other virtual machines away, so there is enough capacity to handle the load.
- Flexible maintenance window. You would like to upgrade those CPUs during the daytime, but you cannot shut down the applications used by everybody else in the office. With live migration you can move the virtual machines off each physical system before you perform your surgery.
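The load balancing case above boils down to choosing the least-loaded healthy host as the migration target. A hypothetical sketch of that decision (the host names and load figures are made up; real schedulers also weigh memory, network bandwidth, and affinity):

```python
def pick_migration_target(hosts, exclude):
    """Pick the least-loaded host (load as a 0.0-1.0 fraction),
    skipping the host we are evacuating."""
    candidates = {h: load for h, load in hosts.items() if h != exclude}
    return min(candidates, key=candidates.get)

# node1 just got slashdotted; find a home for its other guests:
hosts = {"node1": 0.85, "node2": 0.30, "node3": 0.55}
print(pick_migration_target(hosts, exclude="node1"))  # node2
```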
Saving power
Consider a stack of machines providing some kind of web service. They are not busy all day. At peak hours, say 9am to 5pm, they might be working at capacity, but at off hours (e.g. 10pm to 6am) they may be running at only 10% of capacity. IT staff could dramatically reduce the power bill by migrating virtual machines around the server room so that, at off-peak hours, 90% of the machines can be shut down. A 5MW machine room running at 0.5MW for even just one third of the day is a significant saving.
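The arithmetic behind that claim, using only the figures above:

```python
# A 5 MW room drops to 0.5 MW for 8 of every 24 hours (one third of the day).
full_mw = 5.0
idle_mw = 0.5
idle_hours_per_day = 8
saved_mwh_per_day = (full_mw - idle_mw) * idle_hours_per_day
print(saved_mwh_per_day)  # 36.0 MWh saved every day
```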
Security and performance isolation
Running different applications in their own virtual machines means that if one of your applications starts misbehaving, for example by eating up all available memory, the other applications on the same system will not get swapped out. The misbehaving application will run slowly (but it would anyway), while the other applications continue running like nothing happened.
The same holds when an application gets compromised: only that virtual machine (or part of it) falls under the attacker's control. As long as the virtual machines are well isolated from each other, which is typically the case in virtualization technologies where each virtual machine runs its own kernel, the other virtual machines remain safe.
Container technologies, like Linux VServer, Virtuozzo/OpenVZ and Solaris Zones, typically have a lower degree of isolation, in exchange for lower overhead and more flexible resource use.