This introductory chapter is devoted to technologies that are crucial to understanding the main topic of this work, i.e. the virtualization of network devices.
Although the beginnings of computer virtualization date back to the 1960s, the technology has flourished most in the past decade. It is a technique that emulates physical computing resources, such as personal computers, servers, processors, memory, storage devices, network elements, stand-alone applications, or services, as virtual machines running within one or more physical host machines.
These virtual machines are independent and separate from each other and appear to be physical on the outside.
Virtualization can be divided into the following categories:
∙ hardware virtualization
∙ software virtualization
∙ virtualization at the operating system level
Hardware virtualization
This method was used in the very beginning. In this type of virtualization, the allocation of resources to virtual machines was handled by the hardware itself, which ensured that the system calls of individual virtual machines did not affect each other.
Because of the absence of a software layer, this is the most powerful method. However, it was abandoned soon after its introduction, as it placed high demands on implementation and maintenance. At present, the name of this method is often mistakenly used for hardware-assisted software virtualization.
Software virtualization
The most widespread method at present is software virtualization, characterized by the fact that virtual machines are managed by a software virtualization layer. To ensure the running of hosted operating systems, virtual hardware is created for each of these systems. Today, this is what is generally understood by virtualization.
The most important part of software virtualization is the so-called hypervisor, sometimes called a virtual machine monitor (VMM). It is software that takes care of the running of one or more virtual machines and the emulation of their hardware components.
Hypervisors can be divided into two types: native and hosted. A native hypervisor runs directly on the host machine, much like an operating system. A hosted hypervisor runs as a program within the host machine's operating system. Both types of hypervisors are shown in the figure.
Software virtualization has several implementations: full emulation, paravirtualization, and hardware-assisted virtualization. Full emulation simulates all hardware components of the virtual machine.
This is accomplished by binary translation, which converts the system calls of the virtual machine into system calls of the host machine.
Using this method, it is possible to run an operating system designed for a particular type of processor on a host machine with a completely different processor. For example, an OS built for ARM processors can be run on an x86 processor.
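The translation step described above can be sketched with a toy model. This is illustrative only, and all the names in it are mine: real translators (such as QEMU's dynamic binary translation) rewrite basic blocks of machine code, not symbolic instruction tuples, but the principle of mapping each guest operation to an equivalent host operation is the same.

```python
# Toy sketch of binary translation: instructions of a hypothetical
# guest ISA are mapped, one by one, to operations that run on the host.

# Hypothetical guest -> host translation table.
TRANSLATION_TABLE = {
    "MOV": lambda regs, imm, dst: regs.__setitem__(dst, imm),
    "ADD": lambda regs, a, b, dst: regs.__setitem__(dst, regs[a] + regs[b]),
}

def translate_and_run(guest_program, regs):
    """Translate each guest instruction into a host operation and execute it."""
    for instr in guest_program:
        op, *operands = instr
        host_op = TRANSLATION_TABLE[op]   # translation step
        host_op(regs, *operands)          # execution on the host CPU
    return regs

# Guest code: r0 = 2; r1 = 3; r2 = r0 + r1
program = [("MOV", 2, "r0"), ("MOV", 3, "r1"), ("ADD", "r0", "r1", "r2")]
registers = translate_and_run(program, {})
print(registers["r2"])  # 5
```

The per-instruction lookup also makes the performance cost mentioned below tangible: every guest operation pays an extra translation step before anything runs on the host.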
The advantage of full emulation is its high flexibility, thanks to which it is still offered as an option even on modern virtualization platforms such as VMware Workstation or Microsoft Hyper-V.
However, its high performance cost is a disadvantage, as the need to translate all system calls greatly slows down the running of virtual machines.
In paravirtualization, the hosted operating system is aware that it is virtualized, and its system calls are therefore made in a different way so that they can be processed more efficiently by the host system.
In contrast to emulation, this approach brings significant performance improvements, but the virtualized operating system must be modified before it can be used.
In the case of open-source systems such as Linux or FreeBSD, this is not a big issue. With a Microsoft Windows operating system, however, the implementation depends on the vendor or on special drivers.
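The difference between the two approaches can be sketched as a toy model (the class and method names are mine, not taken from any real hypervisor): under full emulation the hypervisor must trap and translate the guest's native instruction, while a paravirtualized guest is modified to call the hypervisor's interface, a so-called hypercall, directly.

```python
# Toy contrast between full emulation and paravirtualization.

class Hypervisor:
    def __init__(self):
        self.translations = 0  # count of trap-and-translate steps

    def trap_and_translate(self, native_instruction):
        # Full emulation: the guest's native instruction must first be
        # decoded and translated before the host can act on it.
        self.translations += 1
        opcode, payload = native_instruction
        return self.hypercall(opcode, payload)

    def hypercall(self, opcode, payload):
        # Paravirtualization: a modified guest calls this entry point
        # directly, skipping the translation step entirely.
        if opcode == "write":
            return f"host wrote: {payload}"
        raise NotImplementedError(opcode)

hv = Hypervisor()
# Unmodified guest: every privileged operation is trapped and translated.
emulated = hv.trap_and_translate(("write", "hello"))
# Modified (paravirtualized) guest: direct hypercall, no translation.
para = hv.hypercall("write", "hello")
print(emulated == para, hv.translations)  # True 1
```

Both paths end with the same result on the host; the paravirtualized path simply avoids the per-operation translation overhead, which is where the performance gain comes from.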
Hardware-assisted virtualization
Between 2006 and 2007, Intel and AMD introduced virtualization support into their processors, creating hardware-assisted virtualization. With this support, the processor can process a system call of the guest operating system directly, i.e. without the involvement of the host operating system or hypervisor.
This means that there is no need to virtualize the processor shared between the host and hosted operating systems. This method also brings a significant increase in performance over full emulation.
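On Linux, the presence of this processor support can be checked from the CPU feature flags in /proc/cpuinfo: "vmx" indicates Intel VT-x and "svm" indicates AMD-V. A minimal sketch of such a check (the sample input below is canned for illustration):

```python
# Sketch: detect hardware virtualization support from /proc/cpuinfo flags.
# "vmx" = Intel VT-x, "svm" = AMD-V.

def virtualization_flags(cpuinfo_text):
    """Return which of the vmx/svm feature flags appear in the given text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found.update({"vmx", "svm"} & set(line.split()))
    return found

# Canned example; on a real system, read the contents of /proc/cpuinfo.
sample = "processor : 0\nflags : fpu vme de pse vmx ssse3\n"
print(virtualization_flags(sample))  # {'vmx'}
```

An empty result means the hypervisor must fall back to full emulation or paravirtualization for that guest.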
Virtualization at the operating system level
This form is also known as container virtualization. Some publications do not even consider it virtualization in the true sense, as there is no emulation of physical hardware and no self-contained hosted operating systems are run. The difference between software and container virtualization is illustrated in the figure.
Container virtualization creates so-called containers: isolated virtual environments within the host operating system that have their own system resources but share the host operating system's kernel.
Each container has its own address space, file system, processes, and devices. As with software virtualization, a container looks to the outside world like a standard physical machine.
To use container virtualization, support for it must be included in the host operating system kernel, and container management tools must be installed. A container is also created in a different way than an operating system is installed.
A special template is required to run a container; its form varies depending on the container management tool used. These templates can be created manually, or pre-prepared ones can be used.
Advantages and disadvantages of container virtualization
The main advantage of containers compared to virtual machines is the almost zero overhead of running and working with them, which results in high speed and performance.
Because of this, there is considerable potential for future application and expansion into more sectors. Container virtualization technology is, however, still relatively new and constantly evolving, which brings with it various disadvantages.
The first shortcoming of this method is the dependence of the containers on the kernel of the host operating system. For example, this property does not allow running a Linux container within the Windows operating system, and vice versa. However, many types of containers can be combined on, for example, a Unix-type system.
The main and most serious drawback of container virtualization is security. Because the kernel is shared, the containers are not completely isolated from the host system. In some cases, a process can even escape from the container into the host operating system's space.
Types of container virtualization
At present, containers can be divided into two groups: operating system containers, which run an entire operating system with one or more applications, and application containers, which contain only a single running application.
The differences between the two types of containers can be seen in the right part of Figure 1.2. One of the first container platforms appeared in the FreeBSD operating system: in 2000, the so-called jails system was introduced.
Solaris Zones followed in 2005 and Linux Containers in 2008; these platforms use operating system containers. Nowadays, application container platforms, whose most prominent representatives are Docker and Rocket, are widely used.
Microsoft has also introduced support for its own application containers in Windows 10 and Windows Server 2016; these can be managed with PowerShell as well as with Docker. At the same time, Microsoft introduced Hyper-V Containers, in which a lightweight virtual machine runs only a single container.
This combination is designed to address container security issues, such as the previously mentioned escape of a process with higher privileges into the host operating system.