CloudNFV proof-of-concept approved by ETSI
Just before the holidays, the ETSI NFV Industry Specification Group approved the proof-of-concept (PoC) application submitted by the CloudNFV initiative. This makes CloudNFV the first PoC to be officially approved in this way, and all the participating companies are hard at work to deliver the next phase of the program. Tom Nolle, the founder of CloudNFV, covered this news in a comprehensive post that explains the significance of the approval, the objectives of the initiative, and the details of the companies involved. No need to repeat all that information here.
Architecture provides end-to-end NFV implementation
As explained in the actual ETSI application, the CloudNFV architecture is a data-model-driven implementation of NFV based on a flexible software platform that provides considerable latitude in creating “virtual” databases and coupling them to agile software processes using information collected from a wide variety of sources, including MIBs. This flexibility means that nearly any standard or open interface can be presented at any appropriate point with only minimal development.
It is that capability that has allowed the team to implement the NFV specification while that specification is still a work in progress. Some assumptions have been made about interfaces and data structures, but it will be straightforward to conform to the final specifications published by ETSI as they become available.
The implementation of CloudNFV is running in Dell’s Solution Center in Santa Clara, Calif., though it’s not necessary to run CloudNFV from that location; testing and development have been done remotely from the sites of the various participants.
Data-plane performance is key to cost-effective deployments
From the perspective of telecom service providers, the high-level objectives of network functions virtualization (NFV) are pretty clear. First, NFV should allow them to save money, partly through reduced capex (commodity hardware platforms) and, more significantly, through reduced opex. The opex savings should primarily come from improvements in network resource allocation thanks to virtualization as well as from reduced network management costs due to increased automation. Additionally, service providers hope to make more money both by improving time-to-market on current services and through the creation of innovative new services, targeted at both enterprises and consumers, which leverage cloud-based services running in the network. Reduced costs plus increased average revenue per user (ARPU) equals happy shareholders, in theory at least.
For these promises to be realized, though, it’s critical that the basic cost-per-subscriber for network infrastructure doesn’t increase as a result of the move from physical to virtual functions. If this fundamental metric can’t be held at a level comparable to today’s networks, then no service provider is going to adopt NFV and take an immediate hit to their baseline cost structure, regardless of the promises of future riches. Unfortunately, virtualizing network functions on standard software architectures introduces significant data-plane performance overheads, and those overheads drive up exactly this cost.
Data-plane performance challenges in CloudNFV
There are two fundamental data-plane bottlenecks that must be addressed in order for an NFV implementation, such as CloudNFV, to be cost-effective.
The first bottleneck is the virtual switch running on the server platform. This virtual switch must provide sustained, aggregated high-bandwidth network traffic to the Virtual Network Functions (VNFs). Standard virtual switches such as the open-source Open vSwitch (OVS) do not deliver adequate performance or scalability.
The second bottleneck is the performance of the VNFs themselves. Within an NFV deployment, VNFs such as a virtual evolved packet core (vEPC) or virtual broadband remote access server (vBRAS) must deliver cost-performance that is comparable to that achieved by equivalent physical implementations. Otherwise, NFV deployments will not be cost-effective, and there will be no ROI justification for a transition to NFV.
In this context, three specific issues have to be addressed:
- The bandwidth of the communication between the host OS running the virtual switch and the guest OSs running the VNFs has to be improved.
- The switching performance of the virtual switch itself has to be accelerated.
- The data-plane performance of the VNFs themselves has to be improved, since the standard OS networking stacks running inside the VMs offer poor performance and limited scalability.
For an NFV implementation to be cost-effective and viable for widespread deployment, these problems have to be solved. In addition, the solution must ensure predictable networking performance across a range of loads and connections, which is critical for assigning virtual resources to services without compromising SLAs.
Real-world solutions for performance bottlenecks
The CloudNFV architecture addresses these performance challenges by incorporating 6WIND’s data-plane acceleration software, both in the virtual switch and in those VNFs that need significant data-plane processing.
Widely deployed today in telecom infrastructure and network appliances, the 6WINDGate™ software maximizes the performance of Layer 2 through 4 protocols through a fast-path architecture. Performance-critical data-plane processing functions are executed on dedicated processor cores, avoiding the networking overheads and latencies that are inherent in the Linux kernel. The architecture also ensures that performance scales linearly according to the number of cores configured to run the fast path.
In the context of the three problems outlined above, 6WINDGate first maximizes the bandwidth between the host OS and the guest OSs through an enhanced driver for the Intel Data Plane Development Kit (DPDK), based on a shared memory architecture. This driver is instantiated in the host OS running the hypervisor as well as in the guest OSs running the VNFs.
Second, 6WINDGate processes virtual switch functions in the fast path to boost its switching capabilities. When applied to the virtual switch in an NFV deployment, 6WINDGate achieves a 5x to 10x acceleration for the baseline Layer 2 switching function, compared to the performance of a standard virtual switch, depending on the number of cores running the virtual switch. This performance improvement is totally transparent and does not require any change in the virtual switch software. Combined with the shared memory driver, this performance improvement can be achieved in a virtualized environment.
6WINDGate typically delivers 10x the performance of the standard Linux networking stack, with seamless scalability.
Thanks to optimizations for virtualized environments, 6WINDGate delivers performance under a hypervisor that is comparable to what it achieves in a physical implementation.
The architecture of 6WINDGate also guarantees predictable performance across loads and connections, enabling service providers to ensure SLA compliance without resorting to over-provisioning.
Analysis using an Intel Sandy Bridge platform has shown that, through the use of 6WINDGate, the performance of critical networking functions such as IPsec is reduced by only a few percent when migrated from a physical implementation to a virtualized environment. (This particular analysis will be covered in full detail by a joint 6WIND-Intel report to be presented to the ETSI Performance working group at the next ISG meeting.)
By minimizing the performance impact of migrating network functions to virtualized environments, 6WINDGate enables service providers to control that all-important cost-per-subscriber metric that we discussed earlier. Upcoming CloudNFV demos will showcase the performance of the architecture for data-plane-intensive VNFs.
What’s next for CloudNFV?
Following the approval of the CloudNFV PoC, you can expect to see formal results appearing this quarter. The team is hard at work on the next phase, with a plan to report results starting in February, and to contribute four major documents to the ISG by mid-June. Additional PoCs are planned, with details to follow soon.