SmartNICs: Give your OpenStack network a boost

Software-programmable network interface cards combine CPU offload with central control, providing more speed and flexibility than SR-IOV or DPDK

One key finding of an April 2016 survey by the OpenStack Foundation was strong interest among OpenStack users in software-based networking on servers, or server-based networking. The factors driving this interest include SDN/NFV (mentioned by 52 percent of respondents), containers (mentioned by 70 percent), and bare-metal deployments (mentioned by 50 percent). This level of interest is reflected in the practices of the largest datacenters in the world (for example, Google Cloud, Amazon Web Services, and Microsoft Azure), which use server-based networking for the flexibility and scalability it provides.

Enterprises are increasingly interested in how server-based networking can best be deployed. With growing bandwidth requirements and the adoption of 10GbE and faster technologies across industries, it has become widely acknowledged that using general-purpose CPUs for server-based networking tasks is highly inefficient. Microsoft, for one, has expressed interest in using software-programmable network interface cards, aka SmartNICs, to scale its server-based networking infrastructure more efficiently. Netronome and Ericsson have shown that SmartNICs can offload and accelerate the server-based networking data path, demonstrating up to a sixfold improvement in TCO for some use cases.

OpenStack networking today

Two technologies have led to improvements in OpenStack networking performance, but each comes with compromises: single-root I/O virtualization (SR-IOV) and the Data Plane Development Kit (DPDK).

SR-IOV. One way to improve OpenStack network throughput is to use SR-IOV plugins with SR-IOV-capable server NICs. When this is implemented in OpenStack Neutron using traditional NICs, bandwidth delivered to virtual machines improves significantly and latency for VM-to-VM traffic is reduced, as is the need to use CPUs for networking tasks. However, this performance comes at a price: the set of server-based networking features available is severely limited, as shown in Figures 1a and 1b.

Figures 1a and 1b show the packet data path from a network port on the server to a VM. In Figure 1a, high bandwidth and low latency are available for a limited set of features, making the solution feasible only for a narrow set of applications. When more advanced features not supported by SR-IOV (such as overlay tunneling or security groups) are required, traffic must traverse a server-based data path, as shown in Figure 1b, resulting in poor performance and high CPU usage.

Figure 1. SR-IOV data paths in OpenStack: (a) the direct SR-IOV path to the VM; (b) the server-based data path required for features SR-IOV does not support.
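
To make the SR-IOV path concrete, here is a minimal sketch of how a tenant might request it through Neutron, using the openstacksdk Python client (my choice of client, not prescribed by the article; the same can be done with the CLI). It assumes the sriovnicswitch mechanism driver and an SR-IOV-capable NIC are already configured on the compute host, and the cloud, network, image, and flavor names are placeholders.

```python
# Illustrative sketch: requesting an SR-IOV port through Neutron with the
# openstacksdk Python client. Assumes the sriovnicswitch mechanism driver
# and an SR-IOV-capable NIC are already configured on the compute host;
# cloud, network, image, and flavor names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

# Ask Neutron for a port whose VNIC type is "direct": a virtual function
# passed straight through to the VM instead of a virtio port on a software switch.
network = conn.network.find_network("tenant-net")
port = conn.network.create_port(
    network_id=network.id,
    name="sriov-port",
    binding_vnic_type="direct",
)

# Boot a VM attached to that port. Its traffic now bypasses the host's
# software switch, which is exactly why SR-IOV-only features are limited.
server = conn.compute.create_server(
    name="sriov-vm",
    image_id=conn.image.find_image("ubuntu-16.04").id,
    flavor_id=conn.compute.find_flavor("m1.medium").id,
    networks=[{"port": port.id}],
)
```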

DPDK. Some of the above challenges can be addressed by moving server-based networking functions such as Open vSwitch (OVS) or the Contrail vRouter into user space and running them on top of DPDK, which allows for more software optimization and less context switching between user space and kernel space. This approach, shown in Figure 2, delivers additional server-based networking features (compared to SR-IOV) and improved performance (compared to a kernel-based networking data path). However, DPDK's benefits come at the cost of high latency and high CPU usage, because DPDK's poll-mode drivers keep dedicated cores busy polling the NIC. Further, with DPDK, a large number of flows in the networking data path (as needed in security, load balancing, and analytics) can severely degrade performance.

Yet another issue with DPDK: When the OVS data path is run in user space instead of kernel space, it is common for users to modify the code to improve performance and augment functionality. This results in a loss of synchronization with the mainstream version of OVS in the Linux kernel.

Figure 2. The OVS or vRouter data path moved into user space and run on top of DPDK.
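
For reference, the sketch below shows roughly what standing up that user-space data path involves: enabling DPDK in OVS and creating a bridge on the netdev datapath. It is driven from Python via subprocess purely for illustration; the bridge name and PCI address are placeholders, and the exact other_config keys vary across OVS releases.

```python
# Minimal sketch of wiring Open vSwitch to its user-space (DPDK) datapath.
# Bridge name and PCI address are placeholders; option names vary by OVS release.
import subprocess

def vsctl(*args):
    """Run an ovs-vsctl command and raise if it fails."""
    subprocess.run(["ovs-vsctl", *args], check=True)

# Enable DPDK support in ovs-vswitchd and reserve hugepage memory for it.
vsctl("set", "Open_vSwitch", ".", "other_config:dpdk-init=true")
vsctl("set", "Open_vSwitch", ".", "other_config:dpdk-socket-mem=1024")

# Create a bridge on the user-space (netdev) datapath instead of the kernel one.
vsctl("add-br", "br-dpdk", "--", "set", "bridge", "br-dpdk",
      "datapath_type=netdev")

# Attach a physical port owned by a DPDK poll-mode driver. The polling
# threads this creates are the source of DPDK's high CPU usage.
vsctl("add-port", "br-dpdk", "dpdk-p0", "--", "set", "Interface", "dpdk-p0",
      "type=dpdk", "options:dpdk-devargs=0000:05:00.0")
```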

Server-based networking with SmartNICs

A third approach is to use a SmartNIC to accelerate and offload the server-based networking functions. In this scenario, the SmartNIC implements all of the required server-based networking functions itself and delivers packets directly to VMs. One such configuration, shown in Figure 3, also incorporates Linux connection tracking (conntrack) to provide an extended set of features at high performance (high bandwidth and low latency) and high efficiency (low server CPU usage), freeing CPUs to handle more VMs.

Figure 3. Server-based networking functions offloaded to and accelerated by a SmartNIC.
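
The offload model can be pictured as a match-action flow cache held on the NIC. The toy Python model below (purely illustrative, not Netronome's implementation) shows why host CPU usage stays low: only the first packet of each flow reaches the host's slow path, which installs a rule; every subsequent packet matches the cached rule and is handled entirely on the NIC.

```python
# Toy model of a SmartNIC's match-action flow cache (illustrative only).
from collections import namedtuple

FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

class SmartNicFlowCache:
    def __init__(self):
        self.rules = {}            # FlowKey -> action, held in NIC memory
        self.slow_path_hits = 0    # packets that had to touch a host CPU

    def handle_packet(self, key, payload):
        action = self.rules.get(key)
        if action is None:
            # Cache miss: punt to the host data path (the OVS/conntrack
            # pipeline), which classifies the flow and offloads a rule.
            self.slow_path_hits += 1
            action = self._slow_path_classify(key)
            self.rules[key] = action
        return action              # e.g. "deliver:vm3", executed on the NIC

    def _slow_path_classify(self, key):
        # Placeholder policy standing in for security groups, tunnels, NAT.
        return "deliver:vm%d" % (key.dst_port % 8)

# All packets of a flow after the first are handled without any host CPU.
cache = SmartNicFlowCache()
flow = FlowKey("10.0.0.5", "10.0.0.9", 34512, 80, "tcp")
for _ in range(1000):
    cache.handle_packet(flow, b"...")
print(cache.slow_path_hits)   # -> 1
```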

A SmartNIC like the Netronome Agilio intelligent server adapter can be used to enable the features that are needed in a broad array of workloads and deployment frameworks, while overcoming the performance and flexibility problems of DPDK and SR-IOV. In fact, the SmartNIC offers numerous benefits over these approaches, as shown in Table 1 below.

Table 1. How the SmartNIC approach compares with SR-IOV and DPDK for server-based networking.

Server-based networking using SmartNIC hardware is also well suited for bare-metal container deployments. Features such as network virtualization, service chaining, load balancing, security, and analytics can be implemented and provisioned from outside the domain of the operating system running on the server. In this case, control plane orchestration, whether through an SDN controller or OpenStack, talks directly to the SmartNIC (Figure 4).

Figure 4. SmartNIC-based server networking for bare-metal container deployments, provisioned from outside the host operating system.

There are various options for controlling the SmartNIC in this scenario. Control messages can be sent in-band on the main network interface, out-of-band using a separate Ethernet interface dedicated to control, or out-of-band using another interface such as NC-SI or SMBus. A local control agent running on the SmartNIC communicates with a centralized SDN controller using a protocol such as OVSDB or OpenFlow. The agent typically runs on some form of general-purpose processor, such as an ARM CPU, integrated into the SmartNIC, and it responds to commands from the SDN controller by inserting and deleting rules that implement forwarding, security policies, and other actions.
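
As a simplified illustration of that control loop, the sketch below uses the Ryu OpenFlow library (my choice; the article does not prescribe a particular controller) to play the role of the centralized controller: when a datapath such as the SmartNIC's agent connects, it installs a table-miss rule plus one example forwarding rule. The MAC address and output port are placeholders.

```python
# Minimal sketch of the controller side: a Ryu OpenFlow 1.3 app that pushes
# rules to whatever datapath connects, which here would be the agent running
# on the SmartNIC. MAC address and port number are placeholders.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class SmartNicRuleInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Table-miss rule: anything unmatched is punted to the controller,
        # the equivalent of the exception/slow path.
        self._add_flow(dp, priority=0, match=parser.OFPMatch(),
                       actions=[parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                                       ofp.OFPCML_NO_BUFFER)])

        # Example forwarding rule: steer traffic for one VM's MAC out port 3.
        self._add_flow(dp, priority=10,
                       match=parser.OFPMatch(eth_dst="fa:16:3e:00:00:01"),
                       actions=[parser.OFPActionOutput(3)])

    def _add_flow(self, dp, priority, match, actions):
        parser = dp.ofproto_parser
        inst = [parser.OFPInstructionActions(
            dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                      match=match, instructions=inst))
```

Launched with ryu-manager, the application simply waits for OpenFlow connections; the agent on the SmartNIC would connect as an ordinary OpenFlow datapath and translate the resulting rules into its hardware tables.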

The near future of OpenStack

For all of this to become a reality, the OpenStack networking plugin specification will have to be enhanced beyond the SR-IOV-based capabilities that exist today to take advantage of more advanced server-based networking capabilities and acceleration. Netronome is taking a leadership role in the development of these enhancements, working with industry leaders such as Mirantis, Ericsson, and Juniper Networks. A draft open specification covering these enhancements is expected in Q3 2016 and will be contributed to the OpenStack community for further feedback and eventual acceptance for industrywide adoption.

Nick Tausanovitch, vice president of solutions architecture and silicon product management at Netronome, is responsible for cloud datacenter applications of Netronome’s intelligent server adapter products.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Copyright © 2016 IDG Communications, Inc.