This article explains the types of virtualisation software available and discusses the hardware and software techniques that enable virtualisation. Contributed by National Instruments
It is important for engineers evaluating virtualisation technology to understand how virtualisation software works.
Virtualisation is a technology that many engineers are evaluating with the goals of lowering cost, reducing footprint, and creating better integrated systems. In order to make informed decisions on the use of virtualisation in measurement and automation applications, it is important to understand the basic virtualisation architectures available and the technologies that make them work behind the scenes. This article will present an overview of this information and help readers form a virtualisation plan for their engineering systems.
It is assumed that the reader has a basic understanding of virtualisation technology and its benefits.
Engineering applications can make use of two major virtualisation architectures: hosted and bare-metal. Each architecture has different implications for I/O access, determinism, and ease of use that should be considered before building a virtualised system. This section presents an overview of each architecture, its mechanism for communicating with I/O devices (a major factor in engineering applications), and its other benefits and drawbacks.
In the hosted architecture, a base operating system (such as Windows) is installed first. A piece of software called a hypervisor or Virtual Machine Monitor (VMM) is installed on top of the host OS and allows users to run various guest operating systems within their own application windows. Common products that use this architecture today include VMWare Workstation and Parallels Desktop for Mac. A diagram of the hosted virtualisation architecture is shown below.
In the hosted virtualisation architecture, each virtual machine (guest operating system) commonly has access to only a limited subset of I/O devices. The host operating system retains ownership of the physical I/O connected to a given computer, and the VMM provides an emulated view of the actual hardware (when possible) to each virtual machine (VM). Because the VMM has no knowledge of most non-generic I/O devices, such as PCI data acquisition cards, it does not present these devices to VMs. Only generic devices like network interface cards and CD-ROM drives are emulated.
In addition, many hosted virtualisation solutions support passthrough functionality for USB ports. This feature allows users to access USB devices directly from individual VMs, providing limited I/O capabilities beyond the emulated devices mentioned above. For example, it may be possible (using hosted virtualisation software) to access NI USB data acquisition devices from a guest OS and acquire data.
In practice, several software components work together to make I/O possible in a hosted virtualisation architecture. For example, the VMWare Workstation product directs I/O requests from virtual machines through a low-level VMM component, then through a driver, and finally to a user-level application component called VMApp, which passes the requests through to the host operating system. The key point to remember is that, in a hosted virtualisation architecture, I/O requests are ultimately serviced by the host OS.
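The layered request path described above can be illustrated with a simplified simulation. The component names follow the VMWare Workstation example, but the function interfaces and request payloads below are invented for illustration; a real VMM does not expose an API like this.

```python
# Toy model of the hosted-virtualisation I/O path:
# guest VM -> VMM -> (driver) -> VMApp -> host OS.
# All names and payloads are illustrative, not a real VMM API.

def host_os_io(request):
    # The host OS ultimately services every I/O request.
    return f"host OS completed: {request}"

def vmapp(request):
    # User-level component that issues requests via ordinary
    # host OS system calls.
    return host_os_io(request)

def vmm(request):
    # Low-level component that intercepts the guest's privileged
    # I/O and forwards it (via a driver) up to VMApp.
    return vmapp(request)

def guest_vm_read(device):
    # From the guest's point of view this looks like a direct
    # device access, but it is routed through the layers below.
    return vmm(f"read from {device}")

print(guest_vm_read("emulated NIC"))
```

The sketch makes the key point concrete: every path out of the guest ends in a call serviced by the host OS.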
Benefits And Drawbacks
One benefit of using a hosted virtualisation architecture is ease of installation and configuration. For example, the VMWare Workstation software can be set up in minutes by running a basic installer in Windows. Once installed, an engineer can create several virtual machines that run different operating systems – all on the same physical computer. In addition, VMMs that use hosted virtualisation commonly run on a wide variety of PCs. Since a host operating system provides drivers for communicating with low-level hardware, VMM software can be installed on most computers without customisation.
As mentioned above, hosted virtualisation architectures cannot emulate or provide passthrough to many PCI I/O devices. In addition, since I/O requests from virtual machines must be directed through a host OS, performance can be degraded. Another drawback of hosted virtualisation is the lack of support for real-time operating systems. Because the underlying host OS dictates scheduling amongst its applications and the VMM, it is usually not possible to run a real-time OS deterministically inside a VM when using hosted virtualisation.
Given these benefits, hosted virtualisation is commonly used for testing beta software (eliminating the need for a dedicated testing machine) or for running legacy applications. It also provides quick support for running different operating systems on one PC, which can be useful for engineers who need frequent access to applications written for several operating systems.
The second common architecture for virtualisation is bare-metal. In this architecture, a VMM (also called hypervisor) is installed that communicates directly with system hardware rather than relying on a host operating system.
Bare-metal virtualisation solutions provide a number of options for I/O access from virtual machines. Because bare-metal virtualisation does not rely on a host operating system, a hypervisor using this architecture can communicate with I/O devices directly. For I/O devices to be shared between virtual machines (e.g. ethernet adapters, hard drives, and so on), the hypervisor software must contain a low-level driver to communicate with each device, and it must be able to emulate each shared device for guest virtual machines.
Another way that bare-metal hypervisors can approach I/O access is to assign individual devices to specific virtual machines. This is called partitioning, and can greatly improve I/O performance and support for engineering applications. Since partitioned I/O devices (such as PCI data acquisition boards) can be accessed directly from VMs using their native drivers, less intervention by the VMM is needed.
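The two I/O strategies just described can be sketched as a routing table: shared devices go through the hypervisor's own driver and emulation layer, while a partitioned device is assigned to exactly one VM for direct access. The device names, VM names, and function interface below are hypothetical.

```python
# Illustrative device routing for a bare-metal hypervisor.
# "shared" devices are emulated for every VM; "partitioned" devices
# are assigned to one VM, which accesses them with its native driver.
DEVICES = {
    "ethernet": {"mode": "shared"},
    "hard_drive": {"mode": "shared"},
    "pci_daq_board": {"mode": "partitioned", "owner": "rtos_vm"},
}

def route_io(vm, device):
    entry = DEVICES[device]
    if entry["mode"] == "shared":
        # Hypervisor's low-level driver plus a per-VM emulated device.
        return f"{vm}: emulated access via hypervisor driver"
    if entry["owner"] == vm:
        # Direct access with the VM's native driver; no emulation.
        return f"{vm}: direct access, native driver"
    raise PermissionError(f"{device} is partitioned to {entry['owner']}")

print(route_io("gpos_vm", "ethernet"))
print(route_io("rtos_vm", "pci_daq_board"))
```

Note how the partitioned path skips the hypervisor's emulation entirely, which is why partitioning improves I/O performance for devices such as data acquisition boards.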
Benefits And Drawbacks
In addition to the improved I/O performance possible when partitioning devices such as PCI data acquisition boards between individual VMs, bare-metal virtualisation architectures have the benefit of supporting real-time operating systems. Since they do not rely on an underlying host operating system, bare-metal hypervisors can implement features to bound interrupt latency and enable deterministic performance. This means that engineers using bare-metal virtualisation can run real-time and general purpose operating systems in parallel on the same processing hardware.
Bare-metal virtualisation does, however, have some drawbacks that should be considered. Any drivers needed to support various hardware platforms must be included in the hypervisor, in addition to drivers for devices that will be shared amongst virtual machines. Furthermore, since bare-metal hypervisors are not installed on top of a host OS, they are typically more difficult to install and configure than a hosted solution.
The low-level nature of bare-metal hypervisors and the I/O access they provide make them useful for deployed applications that use multiple operating systems. Specifically, applications that must provide real-time data processing as well as access to general purpose OS services (such as a graphical user interface) can benefit from bare-metal virtualisation.
When implementing a system that uses virtualisation, a first step should be to determine whether a hosted or bare-metal architecture makes more sense (or a combination of both). Overall, hosted virtualisation can provide many benefits during the development process including reducing the cost of beta testing software, running legacy applications, and supporting applications written for different operating systems. The limited I/O capabilities provided in most hosted virtualisation solutions should be considered carefully when designing deployed applications.
On the other hand, bare-metal virtualisation provides several benefits such as real-time OS support and increased I/O access with partitioning. Integrating a bare-metal hypervisor into an application, however, may require additional installation work and careful consideration of underlying hardware support.
Thus far in this article, VMM software has been treated as a black box that supports running multiple operating systems in parallel. This section will introduce several of the underlying techniques that may be used by a VMM to isolate individual virtual machines from computer hardware. Note that software solutions using either of the virtualisation architectures mentioned above may use any one or more of these techniques.
It should be noted that all of the techniques mentioned have the same end goal: to intercept any virtual machine instruction that could affect system state (shared resources) in any way. Each technique is simply a different way of achieving this goal.
VMMs that use binary translation dynamically alter executing code to avoid affecting system state. Any time a virtual machine's compiled code contains a privileged instruction (e.g. accessing an I/O device), the underlying VMM can use binary translation to redirect the request appropriately and prevent conflicts between individual VMs. Hosted virtualisation software typically makes use of binary translation; one example is the VMWare Workstation product mentioned previously.
Because performance is degraded whenever code must be translated (on any switch from a virtual machine to the VMM), virtualisation software that uses binary translation typically inspects and translates groups of instructions at a time. By minimising the number of times that the VMM must interrupt virtual machine execution, software using binary translation can minimise the performance impact a user experiences.
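As a rough illustration of the idea (not of any real VMM's translator), the following sketch scans a block of toy 'instructions' and rewrites the privileged ones so they call into the VMM, processing a whole block at a time as described above. The instruction set and the VMM entry point are invented.

```python
# Toy binary translator: rewrite privileged instructions in a block
# so they call into the VMM instead of touching real hardware.
# The instruction mnemonics and "call_vmm" entry are illustrative.
PRIVILEGED = {"out_port", "in_port", "write_cr3"}

def translate_block(block):
    # Translating a whole group of instructions at once minimises
    # the number of VM-to-VMM switches at run time.
    translated = []
    for instr in block:
        op = instr.split()[0]
        if op in PRIVILEGED:
            translated.append(f"call_vmm {instr}")  # redirect to the VMM
        else:
            translated.append(instr)                # safe: run unmodified
    return translated

block = ["mov r1, 5", "out_port 0x3f8 r1", "add r1, 1"]
print(translate_block(block))
# ['mov r1, 5', 'call_vmm out_port 0x3f8 r1', 'add r1, 1']
```

Ordinary arithmetic and register moves pass through untouched; only the instruction that would affect shared system state is redirected.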
Rather than modifying VM code as it runs, the hardware assist approach uses special processor technology to avoid changing system state upon privileged instructions. Both Intel and AMD now include virtualisation features in their processors (Intel VT-x and AMD-V) to automatically call a VMM when necessary. Many bare-metal hypervisors make use of this technology.
As with binary translation, performance can be degraded each time a virtual machine's execution must be interrupted by the hypervisor. To minimise this impact, processors that incorporate hardware virtualisation features can be configured to interrupt VM execution only when absolutely necessary. For example, if a specific I/O device is partitioned, it may be accessed by its assigned VM without hypervisor intervention.
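The behaviour above can be modelled as a configurable set of "exit events": only events in the set force control into the hypervisor, while everything else (such as access to a partitioned device) runs uninterrupted. The event names and the shape of this model are invented for illustration; real processors express this through control structures such as VT-x VM-execution controls.

```python
# Toy model of hardware-assisted virtualisation: the processor is
# configured with a set of events that force a "VM exit" into the
# hypervisor; all other guest activity runs without interruption.
# Event names and the exit set are illustrative only.
EXIT_EVENTS = {"access_shared_nic", "modify_page_tables"}

def execute(vm, event):
    if event in EXIT_EVENTS:
        # Hardware transfers control to the hypervisor automatically.
        return f"VM exit: hypervisor handles {event} for {vm}"
    # E.g. I/O to a device partitioned to this VM: no exit needed.
    return f"{vm} runs {event} directly"

print(execute("rtos_vm", "access_partitioned_daq"))
print(execute("rtos_vm", "access_shared_nic"))
```

Shrinking the exit set is exactly what partitioning does for I/O: the assigned VM's device accesses never appear in it.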
One final technique for accomplishing virtualisation is called paravirtualisation. This method involves explicitly modifying an operating system so that it is ‘aware’ of being virtualised and automatically calls the underlying VMM when necessary. These calls are commonly referred to as hypercalls. While paravirtualisation can improve performance greatly by minimising the number of times that a VMM must be called, it requires gaining access to OS source code so that modifications can be made.
When incorporating virtualisation into a measurement or automation application, either hosted or bare-metal VMMs can be used. Each architecture has different implications on I/O, ease of installation, determinism, and other factors that must be considered. At a low level, these architectures depend on techniques such as binary translation, hardware assist, and paravirtualisation to accomplish virtualisation.
As virtualisation becomes more common in the engineering world, it is increasingly important for engineers to understand virtualisation technology, the basic types of virtualisation, and the underlying techniques that make it possible. By making informed decisions, designers can optimise the performance of virtualised systems while balancing I/O and other needs.