How Salt works

These formulas can be very general and can be used for many types of configuration, in contrast to the Tacker and Cloudify templates. For example, to create a virtual router with Tacker, several templates are needed, one for each type of its operating system.

In the case of Salt, a single formula is parameterized by the operating system type and possibly other attributes.

This parameterization is done with the Jinja template language. Selected values are replaced by variables whose values are filled in from the so-called Pillar, which is passed to the state when it is called. The Pillar is a metadata structure in YAML format that specifies the parameters assigned to the formula. Applying different pillars to the same formula therefore makes the state perform different actions and produce different configurations.
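The following minimal sketch illustrates the idea; the formula, file paths, and pillar keys (`webserver`, `port`) are illustrative, not taken from the text. The pillar supplies the data:

```yaml
# pillar/webserver.sls -- illustrative pillar data
webserver:
  port: 8080
```

and the formula references it through Jinja:

```yaml
# webserver/init.sls -- illustrative formula
nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx

/etc/nginx/conf.d/default.conf:
  file.managed:
    - contents: |
        server {
            listen {{ pillar['webserver']['port'] }};
        }
    - watch_in:
      - service: nginx
```

Changing `port` in the pillar changes the rendered configuration without touching the formula itself.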

Jinja also has the great advantage that, although it is not a full programming language, it can express simple conditionals and loops. This brings great flexibility to the formula, making it much more universal and shorter.

Loops are especially effective for repeating parts of a configuration: only one parameterized block is defined in the formula, and any number of its instances can be defined in the pillar. Examples 3.6 and 3.7 provide a better insight into this issue.
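As a sketch of this pattern (the `vhosts` pillar key and file paths are illustrative), the formula defines one parameterized block inside a Jinja loop:

```yaml
# webserver/vhosts.sls -- illustrative formula with a Jinja loop
{% for vhost, cfg in pillar.get('vhosts', {}).items() %}
/etc/nginx/sites-enabled/{{ vhost }}.conf:
  file.managed:
    - contents: |
        server {
            listen {{ cfg.port }};
            server_name {{ vhost }};
        }
{% endfor %}
```

while the pillar decides how many instances are generated:

```yaml
# pillar/vhosts.sls
vhosts:
  example.org:
    port: 80
  internal.local:
    port: 8080
```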

At first glance, deploying Salt may seem complicated, requiring the creation of all the modules, states, formulas, and pillars. In practice, however, the community developing Salt works hard to ensure that all components needed to operate a cloud infrastructure are already implemented.

Most features, such as creating virtual machines or network devices, or managing applications such as web servers, file servers, and databases, are ready for immediate use.

In most cases, you only need to define pillars with the required values and call the relevant modules or states.
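In practice this comes down to invocations like the following; the formula name `webserver` and the pillar values are illustrative:

```shell
# Apply a formula to all minions, overriding a pillar value on the command line
salt '*' state.apply webserver pillar='{"webserver": {"port": 8080}}'

# Or call an execution module directly, e.g. to install a package
salt '*' pkg.install nginx
```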

Modules and states can also be easily extended or written from scratch. They are all written in very clear Python. New modules can be used immediately after creation; for new states, matching formulas must also be created.

Each minion holds static information called Grains, which includes, for example, details about its hardware, operating system, and network. This information can be queried from the minions and used to decide on which of them states and modules will be executed or applied.

In this way, modules and states can be applied, for example, only to Linux minions running the Ubuntu distribution. Custom roles can also be stored in the grains and are frequently used as targeting filters; examples of such roles are webserver, fileserver, database, and so on.
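Grain-based targeting looks like this on the command line; the `roles` grain and the `webserver` formula are illustrative examples of the convention described above:

```shell
# Target only minions whose 'os' grain is Ubuntu
salt -G 'os:Ubuntu' test.ping

# Target minions whose custom 'roles' grain contains 'webserver'
salt -G 'roles:webserver' state.apply webserver
```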

Further information about the minions is stored on the salt-master in the so-called Salt Mine. It contains non-static information, such as current service settings, which is refreshed at regular intervals to keep it up to date.
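A minimal sketch of a Mine setup, assuming the standard `network.ip_addrs` execution function is what should be collected:

```yaml
# Assigned to minions via pillar (or minion config):
# publish each minion's IP addresses to the Mine,
# refreshed every 5 minutes
mine_functions:
  network.ip_addrs: []
mine_interval: 5
```

Other minions and states can then look up this data on the master instead of querying each minion directly.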

Salt and NFV

Despite the fact that Salt was not originally designed for NFV, it can easily be used as a VNFM and NFVO system. The life-cycle stages of a vendor's VNF can be modeled as states. Unlike Tacker and Cloudify, however, it does not natively provide automatic healing.

However, this functionality can be achieved by using the Reactor together with custom modules and states. The master's programming interface also allows OSS and BSS systems to control Salt, ensuring full integration into the NFV framework.
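A Reactor-based healing setup might be sketched as follows; the event tag `vnf/healthcheck/failed` and the `vnf_recover` state are hypothetical names invented for illustration:

```yaml
# /etc/salt/master.d/reactor.conf -- map an event to a reactor file
reactor:
  - 'vnf/healthcheck/failed':
    - /srv/reactor/restart_vnf.sls
```

```yaml
# /srv/reactor/restart_vnf.sls -- re-apply a recovery state
# to the minion that sent the event
restart_vnf:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - vnf_recover
```

A custom beacon or module would emit the failure event; the Reactor then closes the loop.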

Summary
Salt is a very powerful orchestration tool, not only for NFV but for the whole datacentre environment. It can install and configure almost any cloud software. A great advantage is its scalability: some production environments manage thousands of minions at a time. This is made possible by so-called multi-master operation, in which the minions are connected to several masters at once.
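Multi-master operation is configured on the minion side simply by listing several masters; the hostnames below are illustrative:

```yaml
# /etc/salt/minion -- connect to more than one master
master:
  - master1.example.com
  - master2.example.com
```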

In addition to servers, it can also manage some physical network devices, which is a great asset. A disadvantage of the open-source version of Salt may in some cases be the absence of a native graphical interface, although this matters little in the massive deployments for which Salt is intended. A graphical interface is available only in the paid version, SaltStack Enterprise.

Infrastructure and virtualization

The Network Functions Virtualization Infrastructure (NFVI) consists of physical devices such as servers, storage, and network elements, and of the virtualization layer or hypervisor, which is usually part of a cloud platform.

The group of all physical devices located in the same place on which NFV can be operated is called an NFVI point of presence (NFVI PoP). When one NFV framework runs across several geographically distinct locations, there are several NFVI PoPs, one per location.

Supporting technologies for NFV

Because the physical network is emulated, the packet-processing performance, and therefore the network throughput, of virtual machines' network cards drops considerably.

Together with network virtualization, technologies have been developed to streamline packet processing and reduce the performance loss between physical and virtual network devices. In principle, these technologies can be compared to hardware-assisted virtualization.

DPDK

The Data Plane Development Kit (DPDK) is a set of source-code libraries designed for faster packet processing on Intel Xeon processors. Intel released these libraries as open source under the BSD license, so they are freely available to software developers.

These libraries have very quickly found their way into the most widespread virtual network components, such as Open vSwitch, Contrail vRouter, and Brocade vRouter. Intel states that packet-processing performance with DPDK is up to ten times higher than without it.
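As a sketch of what using DPDK looks like in practice with Open vSwitch, assuming an OVS build with DPDK support and a NIC already bound to a DPDK-compatible driver (the PCI address is illustrative):

```shell
# Enable DPDK in Open vSwitch
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# Create a userspace-datapath bridge and attach a DPDK port
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 \
    type=dpdk options:dpdk-devargs=0000:01:00.0
```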

SR-IOV

Single Root Input/Output Virtualization (SR-IOV) is an Intel technology that enables virtual network cards to be created within one physical card.

To use this functionality, both the hypervisor and the network card must support it. A network function in SR-IOV is a separate set of PCIe resources, primarily an address space and base registers, which appears to its surroundings as a separate physical network card.

Two types of network functions are distinguished:
- Virtual Functions (VF) contain only the minimal PCIe resources needed for data transfer.
- Physical Functions (PF) include the complete PCIe resources and can control all SR-IOV functionality; a PF fulfils the full role of a physical network card.

VFs can be created automatically and their number can change dynamically. Each of them can be assigned to a separate virtual machine. The PF occurs only once within one network card and is assigned only to the host operating system.
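On Linux, VFs are typically created through the kernel's sysfs interface; the interface name `eth0` is illustrative:

```shell
# How many VFs does the card support?
cat /sys/class/net/eth0/device/sriov_totalvfs

# Create 4 virtual functions (writing 0 removes them again)
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The VFs appear as additional PCIe devices
lspci | grep -i "virtual function"
```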

SR-IOV greatly reduces the overhead of distributing packets between virtual machines. The hypervisor no longer needs to process every packet and then dispatch it to the relevant virtual machine.

The network card itself delivers VF traffic to the virtual machines, reducing the overhead and, of course, increasing the bandwidth and the number of processed packets. The principle of SR-IOV is described in more detail in Fig. 3.8.
