- Analysts are not employed by SDxCentral.
- Views and opinions expressed in analyst content belong solely to the author and do not reflect the views of SDxCentral, LLC.
- SDxCentral does not fact-check analyst content. If you believe there is a factual error in analyst content, please notify [email protected]. Should we find factual irregularities, the article will be unpublished from the SDxCentral website.
Effective April 18th, 2019, the SDxCentral analyst blog syndication program has been terminated.
SDxCentral Statement about AvidThink, LLC
- Roy Chua, the founder of AvidThink, was a co-founder of SDxCentral. As of September 30, 2018, Roy is no longer affiliated with SDxCentral.
- The views expressed by AvidThink and Roy Chua are independent of SDxCentral and do not represent the views or journalistic principles of SDxCentral.
- As of April 18th, 2019, SDxCentral is no longer publishing AvidThink analyst blogs on the SDxCentral website.
In the service provider space, it’s clear that a standard cloud platform may not be sufficient to deliver carrier-class applications. Service provider applications such as virtual private networks (VPNs), VoIP, and other virtual network functions (VNFs) can’t always be delivered by a best-effort cloud approach.
A virtualized service provider solution must address performance, programmability, security, reliability, and agility, and deliver them with the highest efficiency. Service providers looking to build high-performance NFV environments and VNFs – which supply communications and network services from the cloud – will need a wide array of supplementary technologies to upgrade their environments. Because NFV migrates network functions from special-purpose appliances to high-volume servers, the server’s network processing capabilities are paramount to overall NFV deployment performance and efficiency. To meet these goals, standard hardware will need high-performance network interface cards (NICs), FPGAs, and co-processors added to the NFV model to alleviate potential traffic congestion.
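To put the network processing burden in perspective, a back-of-envelope calculation converts link speed into the packet rate a server must sustain for minimum-size (64-byte) Ethernet frames, which carry an extra 20 bytes of preamble and inter-frame gap on the wire:

```python
# Back-of-envelope packet-rate budget for minimum-size (64-byte) Ethernet
# frames. On the wire each frame also costs 8 bytes of preamble and a
# 12-byte inter-frame gap, so the per-frame cost is 84 bytes (672 bits).

FRAME_ON_WIRE_BITS = (64 + 8 + 12) * 8  # 672 bits per minimum-size frame

def max_pps(link_gbps):
    """Theoretical maximum packets per second at a given link speed."""
    return link_gbps * 1e9 / FRAME_ON_WIRE_BITS

for speed in (10, 25, 100):
    print(f"{speed:>3}G link: {max_pps(speed) / 1e6:.1f} Mpps")
```

At 100G this works out to roughly 148.8 million packets per second – the small-packet rate a server-side data path would have to keep up with at line rate.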
Overcoming the Virtualization Penalty
The main goal of emerging high-performance NFV networking is to mitigate the different flavors of the virtualization penalty – that is, the performance hits a network may take as more functions are added at the virtual compute and networking layers.
Several factors contribute to the high network I/O performance requirements on servers in telco cloud and NFV deployments. Cloud and virtualization or containerization often go hand in hand, and with virtualization, more VMs or containers run on each physical server, demanding more network I/O bandwidth. VNF applications are also, in general, I/O intensive, especially at high rates of small packets. And cloud application instances, whether running in VMs or containers, not only talk to the outside world but, more often, talk among themselves.
This kind of intra-data-center traffic is known as “east-west traffic,” and it now dominates modern data centers. Increased east-west traffic puts strain on many components and has the potential to create bottlenecks – in the NICs that connect servers to networking devices, in the virtual switching infrastructure, or in the data path between a VM’s user space and the host OS kernel.
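The dominance of east-west traffic follows from simple combinatorics: potential VM-to-VM conversations grow quadratically with VM count, while north-south (VM-to-outside) paths grow only linearly. A quick sketch:

```python
# Why east-west traffic dominates: the number of possible VM-to-VM
# conversations grows as n*(n-1)/2, quadratic in the VM count, while
# north-south (VM-to-outside) paths grow only linearly.

def eastwest_pairs(n_vms):
    """Number of distinct VM pairs that may exchange east-west traffic."""
    return n_vms * (n_vms - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} VMs -> {eastwest_pairs(n):>7} potential east-west flows")
```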
A generic cloud environment has the potential to spawn hundreds or thousands of chatty VMs that can increase packet traffic loads across many of these bottlenecks. This is why technology solutions such as Intel’s Data Plane Development Kit (DPDK), single-root I/O virtualization (SR-IOV), and advanced NICs have been developed – each addressing different areas of concern.
For example, DPDK is a set of software libraries designed to optimize network processing on the CPU, achieving higher packet throughput with lower per-packet overhead. But substantial CPU resources must still be dedicated to I/O processing, which becomes problematic as server connections move to 25G and even 100G speeds while Moore’s Law slows for CPUs. SR-IOV gives applications direct access to a NIC – bypassing the virtual switch layer – which can reduce potential bottlenecks between the VM and the NIC.
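The scale of the CPU problem is easy to sketch: at 100G small-packet line rate (roughly 148.8 million packets per second), a single 3 GHz core has only about 20 clock cycles to spend on each packet – a budget that even a streamlined poll-mode data path struggles to fit:

```python
# Rough CPU cycle budget per packet for a single polling core,
# illustrating why 100G line rate strains software-only processing.

def cycles_per_packet(cpu_ghz, mpps):
    """CPU cycles available per packet on one core at a given packet rate."""
    return (cpu_ghz * 1e9) / (mpps * 1e6)

# A 3 GHz core at ~148.8 Mpps (100G line rate with 64-byte frames):
print(f"{cycles_per_packet(3.0, 148.8):.1f} cycles per packet")
```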
As for the NICs themselves, the industry has made major advances in basic virtual networking support such as VXLAN processing offload. In addition, specialized NIC technologies have incorporated FPGA or multicore designs that can offload more advanced functions such as encryption/decryption or deep packet inspection (DPI).
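To see why VXLAN offload matters, consider the encapsulation overhead: VXLAN wraps each frame in outer Ethernet, IP, UDP, and VXLAN headers – 50 extra bytes in the IPv4 case – which a capable NIC can parse and process without burdening the CPU. The relative cost is worst for small packets:

```python
# VXLAN encapsulation adds an outer Ethernet (14 B), IPv4 (20 B),
# UDP (8 B), and VXLAN (8 B) header -- 50 bytes per packet in the
# IPv4 case. The relative overhead is heaviest for small frames.

VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # 50 bytes of outer headers (IPv4)

def overhead_pct(inner_frame_bytes):
    """Relative wire overhead VXLAN adds to a given inner frame size."""
    return 100.0 * VXLAN_OVERHEAD / inner_frame_bytes

for size in (64, 1500):
    print(f"{size:>4}-byte inner frame: +{overhead_pct(size):.1f}% overhead")
```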
NIC Performance Re-Examined
Service providers and VNF vendors say that technologies such as DPDK and SR-IOV are becoming standard for NFV implementations – for many of them, DPDK integration and virtual network offload in the NIC are now baseline requirements.
But what further developments can be expected? First, more work is needed in the virtual switch and NIC area so that a standard set of APIs can be defined to leverage these offload capabilities while applications remain hardware independent. Second, cloud and virtualization management platforms are becoming network-aware, so they can place workloads requiring advanced network capabilities onto servers with the proper offloading and acceleration support. Last but not least, significant effort is going into higher-performance virtual switching and specialized NIC technology such as programmable FPGAs, multicore ARM processors, and improved memory load balancing among NIC chips.
“We are working with chip and equipment vendors on more requirements that will be useful,” says Anand Krishnamurthy, a co-founder and vice president of engineering for Affirmed Networks, which makes VNFs targeted at the mobile core. “For example, you could have enhanced platform awareness – by which the NIC informs the VNF about its capabilities. Some of the capabilities that could be added include putting TCP/UDP checksum in the NIC, flow-level forwarding, or capabilities like IPSec and TLS crypto. Some functions can be offloaded from the applications to the NIC.”
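As a concrete illustration of what checksum offload saves, here is a software version of the 16-bit ones’-complement Internet checksum (RFC 1071) used by TCP and UDP – the per-packet arithmetic that moves from the CPU into the NIC when offload is enabled:

```python
# Software implementation of the 16-bit ones'-complement Internet
# checksum (RFC 1071) used by TCP and UDP -- the per-packet work that
# checksum offload shifts from the CPU into the NIC.

def internet_checksum(data: bytes) -> int:
    """Compute the RFC 1071 checksum over a byte string."""
    if len(data) % 2:                 # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return (~total) & 0xFFFF

# Worked example from RFC 1071: checksum of 00 01 f2 03 f4 f5 f6 f7
print(hex(internet_checksum(bytes.fromhex("0001f203f4f5f6f7"))))
```

Per-packet, the arithmetic is trivial; at tens of millions of packets per second it becomes a meaningful slice of the cycle budget, which is why it is among the first functions pushed down to hardware.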
Another area that Krishnamurthy says to watch is how NICs handle memory sharing between CPUs: a server has two CPUs, and the NIC needs the ability to load balance connectivity to each of them, he says.
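Flow-aware load balancing of this kind can be sketched as: hash each flow’s 5-tuple and steer it consistently to one CPU, so all packets of a flow land on the same CPU for cache locality – the same idea behind receive-side scaling (RSS). The function below is an illustrative sketch, not any vendor’s NIC API:

```python
# Illustrative sketch (not a vendor API) of RSS-style flow steering:
# hash a flow's 5-tuple and pin the flow to one of the host CPUs, so
# every packet of that flow is handled by the same CPU.

import zlib

def steer_to_cpu(src_ip, dst_ip, src_port, dst_port, proto, n_cpus=2):
    """Pick a CPU for a flow by hashing its 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % n_cpus

cpu = steer_to_cpu("10.0.0.1", "10.0.0.2", 49152, 443, "tcp")
print(f"flow steered to CPU {cpu}")
```

Real NICs use a keyed hash (e.g., Toeplitz) and an indirection table rather than CRC32, but the steering property is the same: deterministic per flow, spread across CPUs in aggregate.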
At the recent MEF16 conference in Baltimore, Jeff Cohen, Azure Network Partner Program Manager at Microsoft, said that smart NICs are key to improving performance for Microsoft Azure. By building FPGAs into its NICs, Microsoft programs them to offload work from the CPU. “It takes pressure off of the CPU and provides up to 25G [of throughput], with enormously reduced latency and very fast performance.”
The bottom line is that, while technologies such as DPDK and SR-IOV are important to improve the performance of NFV in service provider networks, more developments are likely ahead. Keep an eye on developments in the NIC and virtual switching space as we get closer to building carrier-class virtualization.