Driving Open Standards in a Fragmented Networking Landscape

Once upon a time, standards were our friends. They provided industry-accepted blueprints for building homogeneous infrastructures that were reliably interoperable. Company A could confidently build an application and — because of standards — know that it would perform as expected on infrastructure run by Company B.

Standards have somewhat fallen out of favor as the speed of digital innovation has increased. Today innumerable software applications are created by innumerable developers at an accelerating pace. Standards — once critical for achieving interoperability — have failed to adapt in this brave new world.

And while the speed of innovation has accelerated, the speed of adoption has remained essentially unchanged. Many organizations adopt new technologies but don’t have the time to transition completely from their legacy infrastructure models to new ones. Over time this leads to a diverse array of technology silos. Some differ in the programming languages on which they’re built: Java, Python, Ruby, Go, etc. Some differ in the cloud infrastructure management platforms they use: vSphere, OpenStack, Amazon Web Services (AWS), Azure, Google, etc. Another diversity vector is the compute paradigm: containers, virtual machines (VMs), bare metal, and more. Worse yet, these silos complicate getting answers to simple operational questions, such as how much it costs to run a certain application, who is running which applications, and where.

The number of permutations is overwhelming, and each has advantages and disadvantages for different use cases and business purposes. Unfortunately, using standards as a blanket approach to drive cross-platform compatibility and interoperability simply doesn’t work in an environment where change is happening so fast.

The telecom industry, for example, is very standards-driven. Over the years, multiple groups have been formed to develop standards for specific elements of the telco stack; the most notable are ETSI, MEF, and TM Forum. The challenge with this approach is fragmentation among these efforts: silos and a lack of interoperability make it hard to agree on an end-to-end standard that is consistent across all layers of the stack, even in an industry where applications are relatively similar.

As application portfolios in telecom have become more diverse, operators have coped by buying turnkey, proprietary solutions from single vendors. This no longer works as well as it once did, because there simply isn’t one vendor or a single solution that can cover all the needed use cases. Thus, barriers to interoperability are creeping in.

Fortunately, over the past few years many open source projects have emerged — OpenStack, Docker, Kubernetes, ONAP, and a range of NoSQL databases — that are becoming an alternative to those standards bodies and are shaping the new telco networking stack. Open source provides a more agile path to de facto standards, where adoption becomes the main measure of success.

Let’s take a look at the enterprise experience as an example. Enterprises were standards-driven until about a decade ago (to wit: SQL, OMG, Java EE). Today this is no longer the case. Standards led by standards bodies have been replaced by open source projects, which have become de facto standards by virtue of their widespread adoption.

Open source standards have numerous positive attributes. First, the process is more democratic, because every developer can participate and contribute. Second, the influence of political agendas is minimized. Finally, the process is significantly more agile and responsive to innovation. The code is king, and there is no need to arrive at a full consensus to make progress. Furthermore, whereas specifications can be interpreted in many different ways and are therefore never enough to ensure compatibility, code provides a single source of truth and, by definition, ensures interoperability.

But open source standards are not without their challenges. Often, there is very little interoperability among the different open source projects, which leads to the very silos that we’re trying to eliminate.

Jonathan Bryce, executive director of the OpenStack Foundation, called out this issue and made it the theme of his keynote address at the OpenStack Summit last month in Sydney, Australia. He said, “The biggest problem in open source today is not innovation; it’s integration.”

SDxCentral’s recent “2017 Open Source in Networking Report” covers the topic in detail, and specifically the relationship between open source and standards:

“The familiar waterfall model breaks down for software-centric solutions, especially as the update cycle is converging on continuous and systems are designed to be tailored into distinct environments. A more iterative lifecycle is needed that blends specification with implementation and accelerates the overall process. While a radical change is necessary, the end goal remains the same: multi-vendor interoperability.”

So how do we find a happy medium and claim the benefits of both open source and standards, in order to ensure integration and, ultimately, scalability?

Open Source Should Drive Standards, Not the Other Way Around.

To illustrate this point, let’s compare the standards-driven and open source-driven approaches.

Standards-driven: ETSI played a very important role in the network functions virtualization (NFV) industry. It defined a common architectural view of what an NFV system should look like and created a common taxonomy. However, the actual products that claimed to support this architecture were vastly different from one another, and there was no real compatibility or interoperability among them, even though they all claimed ETSI compliance.

Open source-driven: ONAP is taking a different approach, using open source as a vehicle to drive common standards. ONAP first took an open source, operator-led point of view to define the architecture. It is now taking the relevant parts from different standards bodies and integrating them into that architecture. Indirectly, this forces stronger collaboration among the standards bodies, which are now aligning themselves with ONAP.

The difference in scope between the two makes the comparison clear: ONAP covers the end-to-end architecture, whereas ETSI’s scope is far more limited.

Secondly, We Should Define “Just Enough Standards.”

Standards should focus on interoperability among the various open source projects or cloud infrastructures and less on implementation. We still need standards, but the scope of standards needs to shift from defining the underlying architecture through low-level, detailed specifications to “just enough standards” to ensure interoperability among projects that do not need to conform to the same standard or API.

We should also enable integration and interoperability among the standards and frameworks that are already in use, rather than continually searching for the next new standard.

A good analogy for this can be seen in the manufacturing industry. Boeing was able to scale its 787 Dreamliner manufacturing pipeline by distributing the manufacturing of each sub-system to different plants across the globe. In order to do that, the plants had to agree on some degree of common modeling to describe the different parts. This allowed Boeing to assemble those parts as if they had all been manufactured in the same plant.

The IT industry requires the same thing. It needs to move away from defining the implementation details of each part and toward a “just enough standard” that allows the industry to interoperate at the sub-system level. Rather than dealing with how we spawn VMs or configure a specific network device, we need to focus on interoperability among systems and services, as the sketch below illustrates. Within this paradigm, the most important role of standards is no longer avoiding lock-in, as it historically has been, but providing a higher degree of abstraction: enough interoperability to allow automation at scale.
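To make this concrete, here is a minimal sketch in Python of what a “just enough standard” could look like at a sub-system boundary. All of the names (ComputeService, KubernetesAdapter, OpenStackAdapter) are hypothetical, for illustration only; the point is that the automation is written once against a thin shared contract, while each silo keeps its own implementation.

```python
from typing import Protocol


class ComputeService(Protocol):
    """The 'just enough' contract: what a compute sub-system must expose.
    It says nothing about how the sub-system is implemented internally."""

    def deploy(self, service_name: str, replicas: int) -> str:
        """Deploy a service and return an opaque deployment identifier."""
        ...

    def scale(self, deployment_id: str, replicas: int) -> None:
        """Change the number of running instances."""
        ...


class KubernetesAdapter:
    """Hypothetical adapter fulfilling the contract with containers."""

    def deploy(self, service_name: str, replicas: int) -> str:
        # A real adapter would call the Kubernetes API here;
        # the contract does not care how.
        return f"k8s:{service_name}"

    def scale(self, deployment_id: str, replicas: int) -> None:
        print(f"scaling {deployment_id} to {replicas} pods")


class OpenStackAdapter:
    """Hypothetical adapter fulfilling the same contract with VMs."""

    def deploy(self, service_name: str, replicas: int) -> str:
        return f"nova:{service_name}"

    def scale(self, deployment_id: str, replicas: int) -> None:
        print(f"scaling {deployment_id} to {replicas} VMs")


def rollout(compute: ComputeService, service_name: str) -> None:
    """Automation written once against the standard, portable across silos."""
    deployment = compute.deploy(service_name, replicas=2)
    compute.scale(deployment, replicas=4)


# The same automation runs unchanged on either platform.
rollout(KubernetesAdapter(), "billing-api")
rollout(OpenStackAdapter(), "billing-api")
```

Nothing in the contract prevents either platform from evolving its internals; the standard constrains only the boundary where the two systems meet.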

More specifically, with “just enough standards” we focus on the boundaries where systems meet: common models, interfaces, and APIs, rather than the implementation details behind them.

The Topology and Orchestration Specification for Cloud Applications (TOSCA) standard provides several good examples of “just enough standards” in action. TOSCA defines a fairly loosely coupled model that can easily be extended to fit the needs of a specific project.
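For instance, the following sketch embeds a minimal service template, based on the TOSCA Simple Profile in YAML, and reads it back programmatically. The node name is illustrative, and the snippet assumes the third-party PyYAML package is installed.

```python
import yaml  # assumes PyYAML: pip install pyyaml

# A minimal service template based on the TOSCA Simple Profile in YAML.
# It declares *what* the node needs (CPUs, memory), not *how* any
# particular cloud should provide it.
TEMPLATE = """
tosca_definitions_version: tosca_simple_yaml_1_0

description: Minimal single-node topology.

topology_template:
  node_templates:
    web_server:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
"""

template = yaml.safe_load(TEMPLATE)
nodes = template["topology_template"]["node_templates"]
for name, node in nodes.items():
    props = node["capabilities"]["host"]["properties"]
    print(f"{name}: type={node['type']}, "
          f"cpus={props['num_cpus']}, mem={props['mem_size']}")
```

The template says only what the node requires (two CPUs, 4 GB of memory), not how any particular cloud should provide it; an orchestrator is free to realize it as a VM, a container, or bare metal.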

The bottom line is that we need to accept that “the only constant is change.” Innovation in software can bring many good things, but we need to learn how we can eliminate the silos, guard against new ones forming, create better interoperability, and simplify operational complexity. The examples above show that by taking a programmatic approach to standards, this degree of interoperability can be achieved even today.