During virtualisation projects, servers are typically split into two groups: servers to be virtualised and servers not to be virtualised. As the processing power of virtualisation hosts has increased, the determining factor between the two groups has shifted from processor requirements to input/output (I/O) requirements.
Today, high-I/O servers are ruled out of server virtualisation projects because they consume so much of the available I/O bandwidth that the virtualisation host is unable to sustain more than a few other virtual machines -- even though there may be plenty of computing resources remaining. The result is a lowered ROI on the virtualisation project, because fewer of the physical machines in the environment are virtualised. However, you can optimise the ROI of your server virtualisation project with network I/O virtualisation.
The problem: Network I/O bottlenecks
It's important to understand what causes network I/O bottlenecks when implementing server virtualisation. If you have 10 virtual machines all making I/O requests, the hypervisor can quickly become overburdened with handling those requests while maintaining overall performance. However, network I/O isn't the only thing to suffer. Overall system processing and memory are also affected when the hypervisor is busy handling I/O tasks. To avoid this problem, companies like VMware recommend a dedicated 1 Gigabit Ethernet (GigE) card per virtual machine.
This practice has several problems. Physical host servers do not have enough card slots to support the number of 1 GigE cards needed. Many integrators recommend dedicated network cards for VMotion and virtual infrastructure management, and on top of that, cards are required for storage I/O. Moreover, adding cards erodes the virtualisation benefit of reduced power consumption: a typical 10 x 1 GigE implementation can draw about 80 watts for the cards alone. Finally, managing ten 1 GigE cards is a challenge from both a physical cabling perspective and a virtual assignment perspective.
Furthermore, while installing another 1 GigE card certainly increases the I/O capacity of a virtualisation host, it does not lessen the bottleneck at the hypervisor itself. That bottleneck grows as the number of virtualised servers increases: the more the hypervisor is interrupted to handle the I/O requests of the virtual machines, the less capable the system is of sustaining I/O performance. The I/O subsystem needs added intelligence to parse I/O traffic effectively and to preserve quality of service for each application.
The solution: Network I/O virtualisation (IOV)
One solution is to apply the concepts of server virtualisation to the network. Essentially, network IOV shares the I/O bandwidth of a network interface card (NIC) across several compute engines -- in this case, virtual machines. Implementing IOV at the NIC level makes the interface appear as multiple interface cards to the host machine.
Network I/O virtualisation can leverage new capabilities in virtualisation OSes to bring order and control to 10 GigE. VMware, for example, calls this capability NetQueue -- a performance enhancement within VMware Virtual Infrastructure that provides multiple receive queues, which can be assigned to individual virtual NICs. NetQueue requires supporting network adapters; these NICs can divide their bandwidth into hardware channels, and each channel can then be assigned to a different queue. Together, the technologies give administrators the ability to individually allocate bandwidth to each hardware channel.
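As a rough illustration -- not VMware's actual API, and with entirely hypothetical names -- the queue-to-vNIC mapping that NetQueue-capable adapters enable can be sketched as a simple data structure:

```python
# Hypothetical sketch of NetQueue-style receive-queue assignment.
# A physical NIC exposes several hardware channels; each channel is
# dedicated to one virtual NIC, so the NIC can steer a VM's frames
# straight to its own queue instead of the hypervisor demultiplexing
# incoming traffic in software.

class PhysicalNIC:
    def __init__(self, name, num_channels):
        self.name = name
        # Each hardware channel starts out unassigned.
        self.channels = {i: None for i in range(num_channels)}

    def assign_channel(self, channel_id, vnic_name):
        if self.channels[channel_id] is not None:
            raise ValueError(f"channel {channel_id} already assigned")
        self.channels[channel_id] = vnic_name

    def channel_for(self, vnic_name):
        # Look up which hardware channel carries this vNIC's traffic.
        for cid, owner in self.channels.items():
            if owner == vnic_name:
                return cid
        return None

nic = PhysicalNIC("vmnic0", num_channels=8)
nic.assign_channel(0, "vm-web")
nic.assign_channel(1, "vm-db")
print(nic.channel_for("vm-db"))  # -> 1
```

The point of the one-channel-per-vNIC mapping is that demultiplexing happens in hardware, which is what takes the interrupt load off the hypervisor.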
These independent channels allow each virtual machine to control its virtual network I/O path as if it were its exclusive path, removing the burden of I/O load balancing from the hypervisor. Because functions such as traffic classification are offloaded from the hypervisor, a virtualised network I/O system will not degrade the response time of applications running on VMs by stealing CPU cycles to manage Ethernet traffic flow.
Network I/O virtualisation results
The result is that a single card can be divided into multiple channels. For example, a 10 GigE card can function as if it were ten 1 GigE cards with almost no speed lost to latency. Additionally, an allocation of bandwidth can still be shared across multiple VMs. For example, 3 Gbps of bandwidth can be allocated to the general-purpose, low-I/O virtual machines, and the remaining 7 Gbps can be allocated as five 1 GigE channels and one 2 GigE channel. The high-performance virtual cards could still fail over to the 3 Gbps general-purpose pool, and each individual hardware channel can be reclassified or reset by the hypervisor as needed. With network IOV, a system administrator can achieve 10 GigE line-speed performance while offloading the CPU and maintaining quality of service per virtual machine by isolating the network I/O channels.
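Using the example figures above (which are illustrative, not a recommendation), the partitioning arithmetic can be checked with a short sketch:

```python
# Sketch of carving a 10 GigE card into virtual channels.
# Units are Gbps of line rate; all numbers are illustrative.

LINE_RATE_GBPS = 10

# A shared pool for the general-purpose, low-I/O virtual machines...
shared_pool = 3

# ...and dedicated channels for the high-I/O virtual machines:
# five channels behaving like 1 GigE cards and one like a 2 GigE card.
dedicated_channels = [1, 1, 1, 1, 1, 2]

allocated = shared_pool + sum(dedicated_channels)
assert allocated == LINE_RATE_GBPS  # every Gbps is accounted for
print(f"{len(dedicated_channels)} dedicated channels + "
      f"{shared_pool} Gbps shared pool = {allocated} Gbps")
```

The same accounting applies to any other split; the constraint is simply that the dedicated channels plus the shared pool cannot exceed the card's line rate.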
In addition to improved consolidation and performance, network I/O virtualisation can improve system resiliency. It mitigates one risk of consolidating onto a single or -- more likely -- dual 10 GigE cards: in a nonvirtualised 10 GigE configuration, it is quite possible for a runaway process to consume all of the I/O available to that card. With network IOV, you simulate the 10 x 1 GigE configuration, and a runaway process can consume only the queue assigned to it. Resetting that queue affects only the machines assigned to it. In a virtualised environment, it is critical to ensure that one runaway virtualised OS does not suddenly create a network storm that consumes the entire 10 GigE bandwidth.
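The isolation argument can be illustrated with a minimal sketch (the queue caps and traffic figures are invented for illustration): a VM that offers far more traffic than its channel's allocation is pinned at that allocation, while the other channels are unaffected.

```python
# Sketch: with per-queue bandwidth caps, a runaway VM can only
# saturate its own channel's allocation. All figures are in Gbps
# and are illustrative only.

def delivered_rate(offered_gbps, queue_cap_gbps):
    # The hardware channel caps the VM at its allocation.
    return min(offered_gbps, queue_cap_gbps)

queue_caps = {"vm-runaway": 1, "vm-db": 2, "vm-web": 1}
offered = {"vm-runaway": 50, "vm-db": 1.5, "vm-web": 0.5}  # vm-runaway is storming

for vm, cap in queue_caps.items():
    print(vm, delivered_rate(offered[vm], cap))

# Without per-queue caps, the 50 Gbps storm would contend for the
# whole 10 GigE link; with them it is pinned at vm-runaway's 1 Gbps.
```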
Virtualisation becomes complicated when there is a mix of virtualised and nonvirtualised workloads. The greater that mix, the more complex the overall environment is to manage, which limits the real potential ROI of virtualisation. Network I/O virtualisation enables you to expand the virtualised server count: more of the classic low-I/O servers can be consolidated onto fewer virtualisation hosts, and a whole new crop of candidates is now available that was not previously considered.
About the author
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualisation segments.