The general public gets some pretty good reporting on big events like the OpenStack Summit held in Paris this past November 3-7, 2014. I thought SDxCentral readers might appreciate a view of the event specifically from the perspective of SDN networking.
Because of my involvement in the leadership of the emergent Open Platform for NFV (OPNFV) project, as well as in OpenDaylight, the Open Networking Foundation (ONF), and OpenStack, I was able to participate in a wide range of activities and interact with experts from across the stack.
Launched in October 2014 with the intention of building an open-source reference implementation of the ETSI NFV architecture, OPNFV can be described as a mid-stream project. The plan is largely based on pulling component open-source projects from “upstream,” and then adding minimal “glue” and lots of system testing of the resulting platform, based on NFV-specific use cases.
The plan is to base the first release of the OPNFV platform on stable releases available today for the upstream component projects we aggregate. In order to extend the support for NFV in upstream component projects, we will need to work in those upstream communities, planning changes/extensions and building them so that OPNFV can inherit them downstream in a future release.
The ETSI NFV architecture includes a cloud orchestration platform component that takes advantage of the generic functionality packaged into the various solutions available for creating infrastructure-as-a-service (IaaS) clouds, such as Amazon AWS. While the OPNFV project does not wish to favor one existing platform over another in general, the fact remains that to build a reference implementation it is necessary to run on one specific cloud orchestration platform, not on a general class of solutions.
For this purpose, the community has chosen to start with OpenStack in the cloud orchestrator role. Nothing prevents contributors from creating and resourcing a project to create an alternative reference implementation using an alternate cloud orchestration platform if they would like to do so.
Although the focus of this blog is on the network specifics, I will briefly touch on compute and storage needs from OpenStack as well, because of their central value in achieving the needed elastic compute cloud infrastructure. Occasionally I have to be reminded that the cloud is really just an IT wrapper for compute; that is what users are ultimately trying to get. I look so much at the networking aspects of providing a seamless, elastic compute experience that I almost forget about the compute and storage legs of the cloud stool.
Many of the middleboxes that we need to virtualize have challenging performance requirements because they need to be able to touch many packets at “wire-speed” in some seriously fast (terabits and more) networks. As a result, OPNFV will need to exploit state-of-the-art technology for accelerating the execution of virtual machine workloads that are touching network traffic in the data plane. Techniques to optimize hypervisors, network drivers, hardware offload engines, and more will need to be included in OPNFV.
In addition to the need for high-performing VM execution platforms, NFV optimization will require the ability to control the placement of VMs on host instances based on communications patterns expected by the workloads. To support this, the OpenStack VM scheduling engine needs to become aware of considerations other than efficiency of compute resource utilization.
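To make this concrete, here is a minimal sketch of what a communications-aware placement filter might look like. The class, its interface, and the affinity map are hypothetical, loosely modeled on the shape of OpenStack Nova's filter-scheduler plugins rather than taken from the actual Nova API:

```python
# Hypothetical sketch of a placement filter that is aware of
# communication affinity between VMs, loosely modeled on the shape of
# Nova's filter-scheduler plugins. All names here are illustrative.

class NetworkAffinityFilter:
    """Prefer hosts that already run VMs this workload talks to heavily."""

    def __init__(self, affinity_map):
        # affinity_map: vm_id -> set of vm_ids it exchanges heavy traffic with
        self.affinity_map = affinity_map

    def host_passes(self, host_vms, requested_vm_id):
        peers = self.affinity_map.get(requested_vm_id, set())
        # Pass hosts that are empty or already host a communication peer,
        # so chained functions can share a host and avoid extra network hops.
        return not host_vms or bool(peers & set(host_vms))


# Usage: filter candidate hosts for placing "fw-1", which talks to "lb-1".
f = NetworkAffinityFilter({"fw-1": {"lb-1"}})
candidates = {"host-a": ["lb-1"], "host-b": ["db-9"]}
passing = [h for h, vms in candidates.items() if f.host_passes(vms, "fw-1")]
# passing -> ["host-a"]
```

The point is that the scheduler's pass/fail decision now depends on expected traffic patterns, not just on free CPU and RAM.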
Similarly, given the scale and breadth of service provider networks and the needs for metering, billing, SLAs, etc., there is a need to adopt cutting-edge massive, distributed, secure storage platforms and technologies. Fortunately, such requirements for OpenStack are not unique to NFV, and there is a lot of existing work in the community. Hopefully this is a “shopping” problem, not a space where something new is needed.
The other piece of making NFV useful is the networking component that causes the traffic to proceed through the virtual functions in a particular sequence, based on a variety of conditions. If the middleboxes are now a bunch of VM workloads, my remaining requirement is agile, programmable networks to make service delivery, scaling, and resiliency a product of an automated software engine to reduce operating costs dramatically. This is where the networking subsystem of the cloud orchestration and its connection to a network controller become critical to the success of NFV, and where it enters the domain of my SDN-focused lens.
When we talk about OpenStack NFV networking, there are two major issues:
- Making the network features in the VM environment high-performing
- Providing a powerful, flexible engine for automating the creation of the forwarding device rules needed to implement the “service function chain”
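To illustrate the second point, here is a toy sketch of how a declarative chain description could be expanded into ordered forwarding rules by such an engine. The rule format is invented for this example; a real controller would render the chain into OpenFlow rules, NSH service paths, or a similar mechanism:

```python
# Illustrative sketch: expanding a declarative service chain description
# into hop-by-hop forwarding rules. The rule dictionary format is
# invented for this example, not a real controller's data model.

def chain_to_rules(chain_name, functions, classifier):
    """Expand an ordered list of virtual functions into per-hop rules."""
    hops = ["ingress"] + functions + ["egress"]
    rules = []
    for i in range(len(hops) - 1):
        rules.append({
            "chain": chain_name,
            "match": classifier,        # traffic matching this classifier...
            "from": hops[i],            # ...arriving from this element...
            "forward_to": hops[i + 1],  # ...is steered to the next function.
        })
    return rules


rules = chain_to_rules(
    "web-chain",
    ["firewall", "ids", "load-balancer"],
    {"dst_port": 80},
)
for r in rules:
    print(r["from"], "->", r["forward_to"])
```

The automation win is that adding, removing, or reordering a function in the chain is a one-line change to the description, with the per-device rules regenerated by software rather than reconfigured by hand.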
Service Function Chaining
With respect to the network, service function chaining is where the bulk of the action is in NFV. In the emerging OPNFV system architecture, an SDN controller will expose a northbound interface (NBI) to the NFV orchestrator to allow the service chaining needs of a set of workloads to be communicated using an intent-driven API (communicating what virtual functions need to be included, not how to build the network).
The solution requires an API from the SDN controller that allows the NFV orchestrator to build service chains portably, without any details of the specific protocols and technologies needed to implement the forwarding behavior for the chain.
Ultimately, the goal is to be able to describe the needs of communicating workloads to the SDN controller in terms that include specific guarantees (service level agreements, or SLAs) for bandwidth, latency, jitter (variability), cost, etc. By hiding the details of the implementation inside the SDN controller black box and providing a portable description of the communication needs, we make these advanced features available to non-experts who don’t care about network implementation technology, just business requirements.
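As a sketch of what such an intent-driven request might look like on the wire, consider the following. Every field name here is invented for illustration; the point is the shape of the contract, i.e. the caller states functions and SLA targets, never forwarding technology:

```python
# Hypothetical sketch of an intent-driven request to an SDN controller's
# northbound interface: the caller states business requirements (which
# functions, which SLA targets), not how to build the network.
# All field names are invented for illustration.

import json

intent = {
    "intent_id": "video-service-1",
    "service_functions": ["firewall", "video-optimizer"],  # what, not how
    "sla": {
        "bandwidth_mbps": 500,   # minimum guaranteed throughput
        "latency_ms_max": 20,    # end-to-end latency ceiling
        "jitter_ms_max": 5,      # variability bound
    },
}

# The controller is free to satisfy this with whatever technology it
# chooses (OpenFlow, segment routing, etc.); the caller never sees that.
request_body = json.dumps(intent, indent=2)
print(request_body)
```

Because the request carries no protocol detail, the same description remains portable across controllers and across the implementation technologies hidden inside them.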