Geeks, large businesses and server farms “get” virtualisation, but both the concept and its application can be a challenge for the SME.
So if you're not a geek, large business or server farm, where does virtualisation fit in the IT mix?
Virtualisation isn't a new idea (it has its roots in the world of the mainframe), but it's a relatively recent development in the world of Intel-based platforms.
Those of us who have experienced catastrophic application crashes – the blue screen of death in Windows, a kernel panic on a Mac, or (as I can attest from personal experience) a Linux application dying in a way that dumps you back at the login screen – know that the frustration isn't merely the inconvenience. It's all the application data in documents and spreadsheets that you didn't have time to save before things went pear-shaped.
When the crash happens in a server, the catastrophe goes beyond inconvenience: the downtime affects all the users.
The problem is that an operating system crash doesn't just burn the application that caused the crash, it takes out all the applications that are running. And there's the first point of virtualisation: what if you could run one and only one application in the operating system? That way, only one application is vulnerable to a crash. Since you don't want to buy five servers to run five applications, you need a way to run separate operating systems on the one server.
That's one point to virtualisation. The other is about the efficient use of resources.
In many environments – and particularly in small and medium businesses – most servers are idle most of the time. A simple file server spends most of its day waiting for users to save or open files; a mail server is only active when users are sending or receiving mail. Even the office VoIP server doesn't work too hard: voice calls aren't particularly demanding, and in most offices, only a few phones are in use at any given moment.
Because we don't want the phones, fileserver, e-mail server and database to depend on each other, we separate these into individual servers, which means wasted capex and opex.
There's the second point to virtualisation: as long as the server machine has enough I/O and processor capacity to handle multiple tasks (which it probably has), if you can run one application per operating system while still only using one “box”, you save money both on system purchase and on electricity and administrative burden.
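The second point is easy to check with back-of-envelope arithmetic. The sketch below adds up average utilisation for a handful of typical SME workloads; the servers and the utilisation figures are illustrative assumptions, not measurements from any particular environment.

```python
# Back-of-envelope consolidation check: can these servers share one box?
# The workload names and utilisation figures are illustrative assumptions.

servers = {
    "file server": 0.10,   # fraction of one host's capacity, on average
    "mail server": 0.15,
    "VoIP server": 0.05,
    "database":    0.20,
}

combined = sum(servers.values())
headroom = 0.30  # spare capacity kept for peaks (an assumed policy)

print(f"Combined average utilisation: {combined:.0%}")
if combined + headroom <= 1.0:
    print("Fits on a single host with room to spare.")
else:
    print("Needs more than one host.")
```

Even with generous headroom, four lightly loaded servers fit comfortably on one machine – which is exactly the capex and opex saving described above.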
A third reason – relevant to Web server farms but not so much the SME – is that different customers can be hosted on the same machine, isolated from each other because they're running on different virtual servers.
So: how does virtualisation work?
The idea is that instead of booting a single operating system on a machine, and running multiple applications on that operating system – which is what desktop users are accustomed to – the “virtualised” system runs several “instances” of the operating system, and then runs different application environments in each instance of the operating system.
Should an application crash in “Server 1”, the other instances don't crash.
The key to this is that instead of loading the entire operating system, as happens on an ordinary user's machine, the virtual server only loads a small operating system layer (often referred to as a “microkernel”). This is an operating system of sorts, but much smaller (in the Microsoft world, for example, the virtualisation microkernel is only about 100 kilobytes of code).
The microkernel acts as a combination traffic cop and booking agent for the “real” operating systems: it loads them, manages CPU, memory, network and disk access for the different OS instances, and makes sure their processes don't interact.
The microkernel is the heart of a virtualised environment: because of it, you can load up many server operating systems, each running as if it were a single operating system on a single server.
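That “traffic cop” role can be sketched in a few lines of code. In the toy model below, each guest OS instance is a Python generator and the hypervisor hands out time slices round-robin; a fault in one guest is caught and contained, and the others keep running. This is an illustration of the idea only, not how any real hypervisor is implemented.

```python
# Toy model of a hypervisor's scheduling and fault-containment role.
# Guest names, step counts and the crash point are all invented.

def guest(name, steps, crash_at=None):
    """A pretend OS instance that does `steps` units of work."""
    for step in range(steps):
        if step == crash_at:
            raise RuntimeError(f"{name} crashed at step {step}")
        yield f"{name}: step {step}"

def hypervisor(guests):
    """Round-robin scheduler: give each live guest one slice per pass."""
    log = []
    while guests:
        for name, g in list(guests.items()):
            try:
                log.append(next(g))
            except StopIteration:
                del guests[name]          # guest shut down cleanly
            except RuntimeError as fault:
                log.append(f"FAULT contained: {fault}")
                del guests[name]          # only this guest is lost
    return log

guests = {
    "Server 1": guest("Server 1", steps=3, crash_at=1),  # will crash
    "Server 2": guest("Server 2", steps=3),
    "Server 3": guest("Server 3", steps=2),
}
for line in hypervisor(guests):
    print(line)
```

When “Server 1” dies on its second time slice, the scheduler simply stops giving it slices; “Server 2” and “Server 3” run to completion untouched – the isolation property described above.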
For the typical SME, the dominant server operating system is Microsoft's (although this is changing with the growing popularity of Linux-based servers for VoIP and storage servers), so TechTarget spoke to Microsoft technology evangelist Jeff Alexander to walk through Microsoft's virtualisation world.
“Microsoft has virtualisation in a number of areas”, Alexander says, “with Virtual PC on the desktop, application virtualisation to separate different versions of the applications from each other, and server virtualisation, with System Center Virtual Machine Manager in the middle.”
For server environments, virtualisation is based on Microsoft's Hyper-V, which has been shipping in Windows Server 2008 since mid-year.
To encourage experimentation and adoption, Microsoft took the step of offering Hyper-V separately from Windows Server as a download.
“It's a good entry point,” Alexander said. “You can download the server, get it up and running fairly quickly, and get your hands dirty without any outlay.”
The download version “doesn't have the full feature set, but it provides a basic solution for getting started.”
In Hyper-V, the Windows Hypervisor code provides the microkernel functionality: “It sits between the operating system and the hardware, and it uses a simple partitioning functionality with a parent partition and child partitions.”
Recognising the way Linux has penetrated “black box” business environments like VoIP servers, Microsoft allows various 32-bit and 64-bit flavours of Linux to launch as operating system instances in Hyper-V alongside Windows Server, but at the time of writing, Microsoft's Hyper-V website only listed SUSE Linux as a supported “guest” operating system.
The other trick to virtualisation in the Hyper-V environment, he said, is to separate the hardware drivers from individual child processes. Instead of each “child” process loading ordinary device drivers (for the mouse, the disk, network, display and so on), Microsoft has developed “synthetic” drivers that load into the virtual environment. These communicate with Hyper-V, which handles communication with the devices themselves.
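The synthetic-driver arrangement is essentially a proxy pattern, and a minimal sketch makes the shape clear. In the code below, the parent partition owns the one “real” disk driver, while each child partition loads a thin synthetic driver that forwards requests to it; all class and method names are invented for illustration and are not Hyper-V APIs.

```python
# Sketch of the "synthetic driver" idea: guests never touch hardware
# directly. Names here are invented for illustration, not Hyper-V APIs.

class RealDiskDriver:
    """Lives in the parent partition; the only code touching 'hardware'."""
    def __init__(self):
        self.blocks = {}

    def write(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        return self.blocks.get(block)

class SyntheticDiskDriver:
    """Loaded inside a child partition; a thin forwarding layer."""
    def __init__(self, guest_name, parent):
        self.guest = guest_name
        self.parent = parent

    def write(self, block, data):
        # Namespacing requests per guest keeps partitions from seeing
        # each other's data -- the isolation described in the article.
        self.parent.write((self.guest, block), data)

    def read(self, block):
        return self.parent.read((self.guest, block))

parent_driver = RealDiskDriver()
guest_a = SyntheticDiskDriver("guest-a", parent_driver)
guest_b = SyntheticDiskDriver("guest-b", parent_driver)

guest_a.write(0, "a's data")
guest_b.write(0, "b's data")
print(guest_a.read(0))  # each guest sees only its own block 0
print(guest_b.read(0))
```

Both guests write to “block 0”, yet neither can see the other's data: only the parent partition ever talks to the device, which is the design choice that lets the hypervisor keep child partitions isolated.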
SMEs will have to work out what is suitable for virtualisation. Alexander said it would not be suitable for a large SQL server, but on the other hand, common office operations such as file/print servers and Web servers virtualise very well.
“The Microsoft Exchange and SQL Server teams are now working on virtualisation guidelines,” Alexander said. “For these environments, there are big dependencies on hardware such as I/O and disks.
“But most servers only work at low utilisation – around 15%.”
Microsoft is offering a downloadable tool to help users identify and plan their virtualisation decisions.
Apart from the already-mentioned resource consolidation, there is at least one more reason to consider virtualisation even in the SME.
If you have a number of servers in an office, it's quite likely that any one of them could bring work to a halt if there's a sudden and unexpected crash. All the worse if it's a hardware failure, you can't get a replacement until tomorrow, and nobody has any spares...
But if the SME consolidates three or four server machines down to one, the virtual servers are no longer tied to particular hardware: they can be restarted on a substitute machine, providing at least a minimalist disaster recovery strategy for those who can't afford to build huge data centres with lots of gigabit connections between them.
It would be fair to point out that virtualisation isn't a Microsoft fiefdom. In fact, because the operating system's internals are far more visible in the Linux world, virtualisation has a long history under free operating systems as well.
However, given the limited space left to me in this article, the rich world of Linux virtualisation may have to wait for another day.