Tackling NFVI with ODL and OpenStack
Though the project continues to make progress, many who have deployed OpenStack have seen that Neutron's mechanics can lead to state-synchronization problems at scale, multi-forwarder issues, and gaps in high availability and other critical features. Massive commitment to OpenStack and community action will ultimately fix these and other issues; in the interim, time passes and instability ensues. Offloading expanding and complex network functionality, like policy expression and the rendering of SFC primitives, to ODL's GBP engine may be a wise separation of concerns. This allows OpenStack to focus on what it does well (Nova) and on fundamental plug-in connectivity (in Neutron), and lets the community make progress with NFV and SFC.
As illustrated in Figure 3, we can create the following division of labor between OpenStack and OpenDaylight:
- Placement: OpenStack provides a mechanism to place workloads
- Policy: OpenDaylight provides a mechanism with GBP to place EPs into EPGs, and define the policies that apply to traffic between pairs of EPGs
- Chaining: OpenDaylight provides a mechanism with its SFC project to specify and realize service function chains
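The division of labor above can be sketched as a minimal in-memory model. This is purely illustrative: the class and field names are hypothetical and do not correspond to actual OpenStack or OpenDaylight APIs.

```python
from dataclasses import dataclass, field

# Placement (OpenStack's role): a workload lands on a compute host.
@dataclass
class Workload:
    name: str
    host: str          # chosen by the Nova scheduler in a real deployment

# Policy (ODL GBP's role): endpoints are placed into endpoint groups,
# and contracts govern traffic between pairs of EPGs.
@dataclass
class EndpointGroup:
    name: str
    endpoints: list = field(default_factory=list)

@dataclass
class Contract:
    consumer: EndpointGroup
    provider: EndpointGroup
    action: str        # e.g. "allow", or "chain:<name>" to invoke SFC

# Chaining (ODL SFC's role): an ordered list of service functions.
@dataclass
class ServiceChain:
    name: str
    functions: list    # e.g. ["firewall", "load-balancer"]

# Wire the three layers together: placement, then policy, then chaining.
web = EndpointGroup("web", [Workload("web-1", "compute-1")])
app = EndpointGroup("app", [Workload("app-1", "compute-2")])
chain = ServiceChain("fw-lb", ["firewall", "load-balancer"])
policy = Contract(consumer=web, provider=app, action=f"chain:{chain.name}")

print(policy.action)  # chain:fw-lb
```

The point of the separation is visible even in this toy: placement never needs to know about contracts, and the policy layer references a chain only by name, leaving its realization to the SFC machinery.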
To demonstrate this division, a project has been started with components in OpenDaylight and OpenStack to enable NFV and policy-driven SFC. The result is the flow illustrated in Figure 4.
One of the keys to the project is that it minimally extends OpenStack Neutron to provide the ODL GBP module with service chain primitives. GBP is, in turn, being extended to understand SFC concepts (chains). Using this Neutron service binder, GBP provides the necessary policy/renderer infrastructure that binds these elements (policies, VNF types, VNF order, etc.) together when creating SFC expressions for different environments (GBP is a "multi-renderer"). This allows an NFV developer and/or operator to understand and use service graphs and, most importantly, the service assurance associated with the service.
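To make the handoff concrete, the sketch below assembles the kind of payload a Neutron service binder might pass to the ODL GBP module. The keys and structure here are hypothetical, chosen for illustration; they are not the actual GBP YANG model fields.

```python
import json

# Hypothetical chain primitive handed from a Neutron service binder to GBP.
# Field names are illustrative only, not real GBP/Neutron API attributes.
chain_request = {
    "tenant": "demo",
    "subject": {"consumer-epg": "web", "provider-epg": "app"},
    "classifier": {"protocol": "tcp", "dest-port": 80},
    "action": {
        "type": "service-chain",
        "chain": ["firewall", "ids", "load-balancer"],  # VNF types, in order
    },
}

# In a real deployment, GBP's renderers would translate this policy into
# environment-specific SFC expressions (e.g. flow rules for a software
# forwarder); here we simply serialize it as a request body would be.
body = json.dumps(chain_request, indent=2)
print(body)
```

Note that the binder expresses only *intent* (which EPG pair, which traffic, which ordered VNF types); how the chain is realized is left entirely to GBP's renderer infrastructure, which is what makes the multi-renderer design possible.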
“Full Stack” Developers as Operators
NFV involves radically compressing the temporality of networking, taking that which previously took days, weeks, or months and compressing it down to minutes or seconds. Everyone understands that this means automation; that is neither news nor a radical concept. The implication of automation is that when human intervention is needed to debug a problem, it will require humans who can quickly traverse the stack from top to bottom to sort out the issue. These temporal demands will not tolerate the traditional "open a trouble ticket, and iterate ownership of the problem" approach (where responsibility is partitioned by layer, component, or code).
Full stack job responsibilities extend from the application layer down into the infrastructure and orchestration layers. This includes the service creation level (through policy and SFC rendering), logical and physical networks, and software forwarders, and potentially extends down into the host kernel. To work in this environment, you will have to be "full stack" aware: capable of understanding, developing, deploying, testing, maintaining, and potentially patching all levels. Both developers and operators in this environment are going to need a robust but simple toolset that cuts across the stack and provides secure management and visibility at multiple layers, in an easy-to-consume manner. Read: it integrates with existing development and operator environments but does more than automate them. New service assurance and service layer management tools fed by analytics may provide an aggregate console into the services deployed, with drill-down capabilities. But the developer or operator of NFV solutions may ultimately have to interact with the "full stack" at the application layer.
The "whole stack" demonstrates the potential to merge open-source agility with the deliberations of standards development organizations (SDOs), creating both informed standards and readily consumable solutions in our rapidly evolving network environment. The code and structure in the aforementioned ODL/OpenStack project is an example of the base from which we can have intelligent discourse about open frameworks, modular components, and open APIs. As applied to NFV, we're seeing that open-source projects each trend toward a peak competence. Instead of flattening one of them into a general-purpose toolkit, they can be loosely coupled into agile, best-of-breed frameworks.
I continually hear that NFV solutions need to be open. I strongly agree, particularly if "open" is to be more than a bullet on a marketing slide. An open solution can be built around OpenStack, ODL, Linux, containers/hypervisors (or bare metal), and OVS at the infrastructure level (e.g., OPNFV). The evolution of SFC and its impact on NFV are just one example of the importance of that stance. However, true openness requires patience. We should avoid "forking up" open-source projects in pursuit of the most recent NFVI architecture changes and performance enhancements (by building and deploying private distributions). Forking open source is the nemesis of the industry and of the developer/operator communities.
Policy-enabled service chaining allows operators to quickly and simply create complex service graphs without complex orchestration or network configuration. The “whole stack” concept used to enable this functionality demonstrates the agility to absorb ongoing change. It allows us to move the NFV discussion and focus beyond the ongoing cycle of performance and architectural enhancements to the transformative potential of NFV service creation.