Software implementation according to NFV

Figure 4.1 illustrates in which parts of the NFV framework the individual software components are located. The OpenStack cloud platform, extended with the OpenContrail SDN, was used as the NFVI and VIM. The SaltStack tool served as the NFVO and VNFM.

Selected architecture components

OpenStack was chosen because of its popularity and the large number of deployments in production environments; it is a very stable and well-tuned platform and is therefore well suited for a cloud implementation with NFV.

OpenContrail lends itself directly to NFV because it has been developed with this use in mind from the outset. Compared to Neutron, the standard OpenStack networking solution, OpenContrail is far more scalable and more stable when running many thousands of virtual machines.

A big advantage over Neutron is the distributed virtual router; in practice, this means that communication between the compute servers hosting the virtual machines can take place directly between them.

With Neutron, this communication passes through so-called network nodes, servers that handle all communication over the virtual networks. This approach routes all network traffic through a single location and therefore, with a large number of instances, leads to its overloading and eventual collapse.

The distributed router eliminates this problem because the load is spread evenly across all compute nodes, resulting in better scalability, stability and, of course, higher network throughput.

Beyond what Neutron provides, OpenContrail offers additional capabilities such as service chaining, load-balancing, analytics, and more advanced firewall features.

When using OpenContrail or another SDN technology, Neutron is not completely removed; rather, its function of operating the network infrastructure is disabled and taken over by the selected SDN technology.

The Neutron API remains in operation, and a call to one of its functions is translated into a call to the corresponding function of the SDN technology in use. This approach is especially suitable for maintaining compatibility with external applications, graphical interfaces, or orchestrators that use the Neutron API.

SaltStack was chosen because it offers the widest orchestration possibilities. In addition to managing NFV, it can easily control the entire cloud environment, and it provides very good native support for OpenStack.

Preparation of VNF

At this stage, it is assumed that OpenStack and OpenContrail are installed and fully operational. The procedure for installing these technologies is lengthy and beyond the scope of this work; however, the installation procedures are described in detail at and

Avi Vantage

As the VNF, the Avi Vantage software load-balancer from Avi Networks was chosen. The complete procedure for its manual installation is described here. The aim of this example is to replace this manual procedure with an automated Salt process.

A load-balancer takes care of the even distribution of load, typically coming from the users of a service, among all the servers that provide it. It is only meaningful if the identical service is provided by two or more servers on the network.

Avi Vantage can be divided into three parts:

∙ Service Engine (SE):
It is a virtual machine that performs the load distribution itself. It listens on the IP addresses and service ports that it balances, accepts requests from users, and redirects them to the servers providing the requested service. For load-balancing to be meaningful, it is important to have several SEs so that not all network traffic is concentrated in one location.

∙ Controller:
The Avi Controller is the main configuration and control point of the entire Avi Vantage system. It takes care of creating new SEs under high load and reducing their number again when the load drops. Typically, it runs as a cluster of three instances, which provides high availability.

∙ Console:
This component provides a graphical web interface and a REST API for integration with external applications.

The installation allows three operating modes. For this example, the Avi-managed LBaaS mode was selected: a special tenant with an administrative user is created for the Avi Controller, which is then able to create Avi SEs in the users' tenants in the cloud. Users can then administer the Avi SEs within their own tenant. The installation procedure is as follows:

1. Create a tenant for the controllers.
2. Create at least one flavor for the controller instance.
3. Upload the controller image to Glance.
4. Create a Neutron management network for the controller and the SEs.
5. Create a security group with the specified firewall rules.
6. Create the controller instance and associate it with a floating IP address.

Creation of formula and pillar for Salt

The requirements are set; now comes the writing of the formula itself, which will reflect the above procedure. Salt state formulas have the ‘.sls’ extension, so a file named ‘avinetworks.sls’ is created in the directory where the Salt formulas are stored.

The directories for storing formulas and pillars are defined in the Salt Minion configuration file. As a first step, the syntax of the written formula will be explained. Since both the pillar and the formula are written in YAML, which uses indentation from the beginning of a line to separate related parts, the structure will be described in terms of indentation levels.
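For illustration, a minimal sketch of such a configuration might look as follows; this assumes a masterless setup driven by salt-call, and the directory paths are only illustrative:

file_client: local
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillar

The formula ‘avinetworks.sls’ would then be placed under /srv/salt and the corresponding pillar data under /srv/pillar.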

First, a pillar is defined, which represents the structure of the metadata passed to the formula. It carries the parameters that will vary for each instance of this VNF. The first level contains the name of the Salt role, and the second level lists its sub-roles.

In this example, it is the ‘avinetworks’ role, which contains one controller. The sub-role then lists the individual variables, for example whether it is enabled (the ‘enabled’ parameter). The ‘identity’ parameter specifies the name of the Keystone authentication service endpoint under which the authentication credentials used by this state are stored.
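A minimal sketch of such a pillar might look as follows; apart from the ‘enabled’ and ‘identity’ parameters mentioned above, the key names and values are only illustrative assumptions, not the exact parameters of the formula:

avinetworks:
  controller:
    enabled: true
    identity: cloud1
    tenant: avi-controllers
    user: avi-admin
    password: secret
    image: avi-controller-17.1
    flavor: m1.xlarge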

Before starting with the individual steps, at least the basic condition must be defined: whether the role is enabled at all. The following example shows the Jinja syntax for conditionals and loops. Note also that the pillar parameters are accessed using dot notation. All of the following code will be placed inside this condition.
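A sketch of this condition, following the pillar structure above, could look like this:

{%- if pillar.avinetworks.controller.enabled %}

# ... the individual steps of the formula go here ...

{%- endif %}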

Step names are defined at the first level, and at the second level the Salt modules that are invoked when the step runs are specified. At the third level, the parameters for these modules are passed. The first parameter on the third line is an example of a Jinja variable, enclosed in two pairs of curly braces; here again, the pillar parameters are accessed through dot notation.
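Applied to the first step of the procedure, this structure might be sketched roughly as follows; the state ID and pillar keys are illustrative, and the standard Salt keystone.tenant_present state with its profile argument is used here only as an assumed way of passing the identity information:

avinetworks_controller_tenant:          # first level: name of the step
  keystone.tenant_present:              # second level: Salt module to call
    - name: {{ pillar.avinetworks.controller.tenant }}    # third level: parameters
    - description: "Tenant for Avi Vantage controllers"
    - profile: {{ pillar.avinetworks.controller.identity }}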

The first step creates the tenant for the controllers. The second step adds an admin user to the previously created tenant. Every call to a Salt module must be passed the ‘identity’ parameter for authentication and authorization. Another interesting parameter is ‘require’, found on the 15th line of the sample.

This parameter contains a list of steps on which the execution of the current step depends. This can greatly speed up execution in case of errors: if one of the steps fails, the steps that depend on it are not executed at all, because they would fail as well due to the dependency. The meaning of the remaining parameters is easy to derive from their names.
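The referenced sample itself is not reproduced here, but building on the tenant step sketched above, the second step with its ‘require’ relation might look roughly like this; again, the state ID, pillar keys, and the use of the standard keystone.user_present state are assumptions:

avinetworks_admin_user:
  keystone.user_present:
    - name: {{ pillar.avinetworks.controller.user }}
    - password: {{ pillar.avinetworks.controller.password }}
    - tenant: {{ pillar.avinetworks.controller.tenant }}
    - roles:
        "{{ pillar.avinetworks.controller.tenant }}":
          - admin
    - profile: {{ pillar.avinetworks.controller.identity }}
    - require:
      - keystone: avinetworks_controller_tenant

If ‘avinetworks_controller_tenant’ fails, this step is skipped entirely instead of failing on a missing tenant.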


