The key attributes of network functions virtualization (NFV) are well documented; NFV separates the application from the underlying hardware to enable better utilization of the hardware, resulting in improved scalability and reduced capex and opex. To date, many of the associated discussions have focused on porting network functions to a new virtualized environment. However, each network function will also have a unique set of performance requirements. These requirements become even more critical when we consider control plane virtualization versus data plane virtualization.
Control plane applications, even when encased in a larger function (think evolved packet core, or EPC), have historically run on x86, so the effort to squeeze in a virtualization layer should be relatively small. The key here will be inserting the virtualized infrastructure manager (VIM), as defined by the European Telecommunications Standards Institute (ETSI) NFV Industry Specification Group (ISG).
Most of the data plane functionality deployed today utilizes non-x86 architectures, either in the form of merchant silicon packet processors or more expensive, custom application-specific integrated circuits (ASICs) built for a specific function. The most significant challenge for virtualizing the EPC is therefore on the data plane side. Moreover, the desire to migrate to a homogenized commercial-off-the-shelf (COTS) environment does not eliminate the performance requirements of data plane applications. Rather, it presents a unique challenge on the path to the promised land of NFV.
Most mobile operators are already aware that the “standard” virtualization deployment model (standard server -> hypervisor -> guest OS -> application) works primarily for the aforementioned control plane apps. To reach the higher performance that data plane applications require, the following techniques can be used.
- Intel’s Data Plane Development Kit (DPDK) has been covered widely enough that there is no need to rehash it here. In short, it moves packet processing into user space and polls the NIC directly, bypassing the kernel network stack (a minimal sketch appears after this list).
- Single-Root I/O Virtualization (SR-IOV) is a PCI-SIG specification that gained traction around 2008, when virtualization was first catching on in the enterprise space. It lets a single physical NIC expose multiple virtual functions (VFs) that are assigned directly to virtual machines (VMs), so packets reach the correct VM much faster. However, because each VM is bound to a physical device, SR-IOV limits VM migration across platforms, and it does not allow switching of packets between VMs (see the VF provisioning sketch after this list).
- Open vSwitch (OVS) with DPDK does allow switching of packets between VMs, though more work is needed to close the gap between virtualized and bare-metal performance. OVS enables the SDN model to be applied at the VM level, and it aids VM migration by allowing a VM’s networking state, such as its tenant (VLAN) and its latency/bandwidth and other QoS settings, to move to a new OVS instance. Most scenarios require a choice between SR-IOV and OVS; the two can coexist on a host but are not interchangeable (a given VM belongs to either the SR-IOV domain or the OVS domain).
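To make the DPDK bullet concrete, here is a minimal poll-mode forwarding sketch in C. It is an illustration rather than a production design: it assumes a single NIC port (port 0) already bound to a DPDK-compatible driver, uses one RX and one TX queue, and trims most error handling.

```c
/* Hypothetical minimal DPDK forwarder: polls port 0 and echoes packets
 * back out the same port. Assumes one NIC port bound to a DPDK-compatible
 * driver; build against libdpdk (e.g., via pkg-config). */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define BURST 32

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* Packet-buffer pool shared by the RX and TX queues. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "mbuf_pool", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
        rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    uint16_t port = 0;
    struct rte_eth_conf conf = {0};
    rte_eth_dev_configure(port, 1, 1, &conf);
    rte_eth_rx_queue_setup(port, 0, 1024, rte_eth_dev_socket_id(port),
                           NULL, pool);
    rte_eth_tx_queue_setup(port, 0, 1024, rte_eth_dev_socket_id(port),
                           NULL);
    rte_eth_dev_start(port);

    /* Poll-mode loop: no interrupts and no kernel stack in the data path. */
    struct rte_mbuf *bufs[BURST];
    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST);
        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
        while (nb_tx < nb_rx)            /* free what the NIC refused */
            rte_pktmbuf_free(bufs[nb_tx++]);
    }
}
```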
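Similarly, SR-IOV provisioning on Linux typically happens through the kernel’s standard sriov_numvfs sysfs attribute. The sketch below shows the idea; the physical function (PF) name eth0 and the VF count of 4 are assumptions for illustration, and the resulting VFs would then be handed to VMs via PCI passthrough.

```c
/* Hypothetical helper: carve VFs out of a physical NIC via sysfs so
 * each VF can be passed through to a VM. "eth0" is an assumed PF name. */
#include <stdio.h>
#include <stdlib.h>

static int set_num_vfs(const char *pf, int num_vfs)
{
    char path[256];
    snprintf(path, sizeof(path),
             "/sys/class/net/%s/device/sriov_numvfs", pf);

    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");        /* PF missing or SR-IOV unsupported */
        return -1;
    }
    /* Writing a count tells the PF driver to spawn that many VFs;
     * if VFs already exist, 0 must be written first to tear them down. */
    fprintf(f, "%d\n", num_vfs);
    return fclose(f) == 0 ? 0 : -1;
}

int main(void)
{
    if (set_num_vfs("eth0", 4) != 0)
        return EXIT_FAILURE;
    puts("4 VFs created; assign them to VMs via PCI passthrough.");
    return 0;
}
```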
Operators seeking to virtualize their data plane applications should follow these tips:
- Understand performance bottlenecks: know whether the limiting factor is packet I/O, inter-VM switching, or the application itself.
- Utilize the available tools and understand their usage models. The tools above can be combined, as with OVS-DPDK, to improve performance. In other cases, however, a selection has to be made; for example, SR-IOV and OVS cannot be used together within the same VM.
- Understand other options for optimizing performance. For example, I once encountered a scenario where hyperthreading had to be disabled to reach the required performance. This creates a unique challenge: disabling hyperthreading is a BIOS setting, so a dedicated portion of the data center must be allocated to providing VMs for this application (see the SMT check after this list).
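As an illustration of the hyperthreading point, recent Linux kernels expose SMT status under /sys/devices/system/cpu/smt/active. A placement tool could use a check like the following sketch (hypothetical, not any particular orchestrator’s API) to steer a data plane VNF onto hosts where hyperthreading has been disabled in the BIOS:

```c
/* Hypothetical placement check: read the kernel's SMT status so an
 * orchestrator can restrict a data plane VNF to hosts with
 * hyperthreading disabled. The sysfs attribute below exists on
 * reasonably recent kernels. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/devices/system/cpu/smt/active", "r");
    if (!f) {
        /* Older kernel: attribute absent, fall back to other probes. */
        fprintf(stderr, "SMT status not exposed by this kernel\n");
        return 2;
    }
    int active = -1;
    if (fscanf(f, "%d", &active) != 1)
        active = -1;
    fclose(f);

    if (active == 1)
        printf("SMT is on: exclude this host from the data plane pool\n");
    else if (active == 0)
        printf("SMT is off: host eligible for the data plane VNF\n");
    return 0;
}
```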
Performance optimization will be an area of growing interest as the data plane is virtualized, making the exploration of new techniques and functionality paramount.