- Analysts are not employed by SDxCentral.
- Views and opinions expressed in analyst content belong solely to the author and do not reflect the views of SDxCentral, LLC.
- SDxCentral does not fact check analyst content. If you believe there is a factual error in analyst content, please notify email@example.com. Should we find factual irregularities, that article will be unpublished from the SDxCentral website.
Effective April 18th, 2019, the SDxCentral analyst blog syndication program has been terminated.
SDxCentral Statement about AvidThink, LLC
- Roy Chua, the founder of AvidThink, was a co-founder of SDxCentral. As of September 30, 2018, Roy is no longer affiliated with SDxCentral.
- The views expressed by AvidThink and Roy Chua are independent of SDxCentral and do not represent the views or journalistic principles of SDxCentral.
- As of April 18th, 2019, SDxCentral is no longer publishing AvidThink analyst blogs on the SDxCentral website.
As the network functions virtualization (NFV) revolution comes to service provider and cloud communities, there are some concerns about this new technology. One of the major questions is whether NFV can be engineered to deliver enough performance to keep pace in high-subscriber, mission-critical environments.
Can NFV live up to the performance expectations of the most demanding networks, including global service providers? There is evidence that more work remains to transform this IT technology – but key technology tools are emerging to give NFV the performance required by the most demanding applications, including communications.
The advantage of NFV is that it can deliver network functionality via software running on industry-standard, commercial off-the-shelf (COTS) hardware. Any number of software services can be deployed on this generic hardware, which helps ensure interoperability and flexibility.
A common tradeoff for generic, industry-standard hardware is that it may not perform as well as something that is custom built. This is an age-old issue in the networking space, where, for example, networking hardware providers have for decades contemplated the use of industry-standard network processors vs. spinning their own custom chips.
With NFV, the benefits of interoperability and flexibility appear to outweigh the potential performance cost, as service providers look to build a network and applications architecture for the future. And enough technical progress appears to be under way to overcome the performance limitations that commodity servers and software have traditionally imposed on NFV.
Performance Anxiety and NFV
Let’s start with a look at why NFV environments elicit performance anxiety, and where it comes from. The core technology for NFV – virtual machines (VMs) running on x86-based servers – emerged from the enterprise world. VMs are designed to “spin up” instances of an operating system that can run applications for an enterprise customer, and then “scale out” by adding more VMs and, if necessary, servers, to keep up with new subscribers.
Certain applications in a service provider environment – for example, mobile services – require the capability to handle millions of subscribers. In addition, real-time communications applications have more stringent requirements than, say, a Web server.
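The scale-out model described above comes down to simple capacity math: add instances until the subscriber load is covered. A minimal sketch, where the per-VM subscriber capacity is a purely illustrative figure, not a benchmark:

```python
import math

def vms_needed(subscribers: int, subs_per_vm: int) -> int:
    """Return how many VM instances a scale-out deployment needs
    to cover the subscriber load (illustrative sizing only)."""
    if subs_per_vm <= 0:
        raise ValueError("subs_per_vm must be positive")
    return math.ceil(subscribers / subs_per_vm)

# A mobile service with 5 million subscribers and a hypothetical
# capacity of 50,000 subscribers per VM needs 100 instances.
print(vms_needed(5_000_000, 50_000))  # → 100
```

The point of the arithmetic is the shape of the curve: at telecom subscriber counts, "spin up another VM" becomes a fleet-management problem, not a one-off operation.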
If a VM fails, there is a process for replacing it or moving it to another server – a move that might take seconds or even minutes in standard cloud environments. But what if you are running a virtual evolved packet core (vEPC) VNF on a mobile network? Many service providers expect their applications to tolerate less than 50 milliseconds of latency – a demand that clearly pushes the boundaries of standard VM and NFV technology.
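The mismatch between those two time scales can be made concrete with a toy budget check. The failover figure below is an assumption chosen for illustration (a 30-second cloud VM replacement); the 50 ms budget is the operator target cited above:

```python
LATENCY_BUDGET_MS = 50       # sub-50 ms target many operators cite
VM_FAILOVER_MS = 30_000      # assumed: a 30-second cloud VM replacement

def within_budget(event_ms: float, budget_ms: float = LATENCY_BUDGET_MS) -> bool:
    """True if a service interruption fits inside the latency budget."""
    return event_ms <= budget_ms

# A standard VM failover misses the budget by orders of magnitude.
print(within_budget(VM_FAILOVER_MS))        # → False
print(VM_FAILOVER_MS / LATENCY_BUDGET_MS)   # → 600.0 (600x over budget)
```

Even shaving the failover to one second still leaves a 20x overshoot, which is why the fixes discussed below target the data path and the virtualization layer rather than faster VM restarts alone.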
There is some consensus that certain enhancements will need to be made to existing NFV technology to get it to the next level of reliability and reassure global service providers that they can rely on NFV for the next generation of infrastructure. This is going to be a key differentiator among NFV technology providers over the next 12 months, as service providers have made it clear that they haven’t settled on NFV infrastructure (NFVI) platforms – and they will switch when things go wrong.
As demonstrated in SDxCentral’s recent “2016 Mega NFV Report Part I: MANO and NFVI,” a wide range of technological solutions to build NFV platforms is emerging. Some technology vendors are taking specific steps to build “carrier class” into their platforms – and for good reason. The demands of communications apps will no doubt require stout NFVI gear that can handle connections scaling to 40 Gb/s and later 100 Gb/s throughput. And cloud providers will be seeking the same kinds of scale and reliability.
A couple observations from that report: An NFVI solution needs to support the environment in which it will be deployed; and it is critical to understand the limits of performance in NFV and the bottlenecks that might exist in the virtualization layer. Other questions that are commonly asked include: What kind of scale does the overall system provide? Does it have its own management system that can handle multiple instances? How is load managed?
Tools for Performance in NFV
When we dug into specific technology details, it was clear that many NFVI vendors are taking unique approaches to adding performance enhancements to the NFVI platform. Some of the popular emerging technologies include the Data Plane Development Kit (DPDK); integrated virtual switching technology such as Open vSwitch (OVS) or embedded switch (eSwitch) functionality; and single-root I/O virtualization (SR-IOV).
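To make one of these techniques concrete: SR-IOV lets a single NIC present multiple virtual functions (VFs) that can be handed directly to VNFs, bypassing the software switch on the data path. On Linux, the VF count is set through the device's `sriov_numvfs` sysfs attribute. The sketch below builds that sysfs write; the PCI address is illustrative, and actually applying it requires root and SR-IOV-capable hardware:

```python
from pathlib import Path

def sriov_vf_request(pci_addr: str, num_vfs: int) -> tuple:
    """Return the (sysfs path, value) pair that enables num_vfs
    virtual functions on the PCI device at pci_addr
    (e.g. '0000:03:00.0' — an illustrative address)."""
    if num_vfs < 0:
        raise ValueError("num_vfs must be non-negative")
    return (f"/sys/bus/pci/devices/{pci_addr}/sriov_numvfs", str(num_vfs))

def apply(pci_addr: str, num_vfs: int) -> None:
    # Requires root on SR-IOV-capable hardware; writing "0" first
    # resets any VFs that already exist.
    path, value = sriov_vf_request(pci_addr, num_vfs)
    Path(path).write_text(value)

print(sriov_vf_request("0000:03:00.0", 4))
```

In a DPDK-based deployment, the resulting VFs are then typically bound to a userspace driver (for example with DPDK's `dpdk-devbind.py` utility and the `vfio-pci` driver) so the VNF can poll the NIC directly instead of taking kernel interrupts.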
It’s clear that we’re in the early days of “carrier-classifying” NFV – but the means to do this exist. The question is how far the market will go in developing carrier-class NFV solutions. The market is going to have to develop solutions that balance high performance standards with interoperability, which is what many in the service provider community are looking for. The technology vendors that can differentiate themselves along these lines will be the most successful.