Glenn Seiler is vice president of product management at Wind River, where he is responsible for creating and managing networking product initiatives that solve critical problems for Wind River customers and for developing solutions for NFV and network transformation. Glenn has been managing platform software for more than 30 years and has been a strong advocate for open source and open standards. He was a founding member of the Carrier Grade Linux Workgroup, a contributor to the SCOPE Alliance Carrier Grade OS committee, and vice president of the OpenSAF Foundation. In this interview, he shares his position on topics ranging from open standards to mitigating the risk of open NFV solutions.
SDxCentral: What’s your definition of “open?” We’ve covered different forms of open on the site, from open source, to open standards, to open APIs. And why is open particularly relevant to telecom equipment manufacturers (TEMs)?
Seiler: In many ways, telecom leads other vertical industries (like defense, industrial, or medical) in the requirement for openness because of the interoperability inherent in a communications network. For example, a single mobile call can often transfer between multiple service provider networks and between equipment from different manufacturers. So standards, and especially open standards, are critical to the success of the overall industry. Also, because equipment stays in the network for so long, service providers need to be "future-proofed"; they need to be confident that their equipment will interoperate and be compatible for years to come.
In order to do that, there are really two essential characteristics of open:
- It’s important to use open APIs that are fully compatible with all relevant industry standards, whether de facto or formally standardized (e.g., 3GPP).
- Service providers need to be assured that there will not be any vendor lock-in at the component level, for example in the cloud software or hypervisor. This is why open technologies such as OpenStack and KVM are so popular in telecom solutions today; a brief sketch of what that looks like in practice follows this list.
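As a concrete illustration of that component-level openness, here is a minimal sketch (ours, not from the interview) of booting a VNF workload through openstacksdk, the vendor-neutral Python client for the OpenStack APIs. The cloud name "engineering" and the image, flavor, and network names are placeholders for whatever the target environment defines.

```python
# Minimal sketch: booting a VNF workload through the vendor-neutral
# OpenStack API via openstacksdk. Because the API is the same on any
# conforming OpenStack cloud, the workload is not tied to one vendor's
# infrastructure. Names below ("engineering", "cirros", "m1.small",
# "private") are placeholders for this illustration.
import openstack

# Credentials come from a clouds.yaml entry; "engineering" is a placeholder.
conn = openstack.connect(cloud="engineering")

image = conn.image.find_image("cirros")          # guest image to boot
flavor = conn.compute.find_flavor("m1.small")    # CPU/RAM/disk sizing
network = conn.network.find_network("private")   # tenant network

server = conn.compute.create_server(
    name="demo-vnf",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
# Block until the instance reaches ACTIVE (or raise on error/timeout).
server = conn.compute.wait_for_server(server)
print(server.status)
```

Because none of these calls is specific to one vendor's distribution, the same script runs unchanged against any OpenStack-based NFV infrastructure, which is precisely the lock-in protection Seiler describes.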
Which standards are important in an open network functions virtualization (NFV) strategy? And are they sufficiently developed as of today?
Seiler: First, it is important to separate NFV into its three main categories: NFV infrastructure, virtual network functions (VNFs), and management and orchestration (MANO). Most of the open source effort today, with OPNFV, is focused on NFV infrastructure: the storage, compute, and networking components that make up the cloud. These areas are fairly mature, with a few notable exceptions like the vSwitch. Standards at the VNF layer (the service functions) are much less mature. In fact, there really are no standardized APIs today; the de facto standard seems to be the APIs provided by OpenStack. While that is a good start, there is still a lot missing. For example, full high-availability (HA) services for VNFs, software management functions, and other traditional operations, administration, and management (OAM) functions are not well implemented as standards for NFV today. Finally, MANO is critical to the success of NFV. This area is getting a lot of attention, and there is a lot of effort underway at ETSI to define standards for MANO. But there are gaps there as well, such as the need to integrate orchestration services with established legacy operations support systems (OSS) and business support systems (BSS). There really are no standards being defined in those areas.
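To make the HA gap concrete: with only the stock OpenStack compute API, about the best a do-it-yourself availability manager can do is poll instance status and restart on failure. The toy loop below is our sketch (with a placeholder cloud name), not anyone's product, and it shows how far plain OpenStack falls short of carrier-grade HA, which needs fast failure detection, state failover, and service-level health checks.

```python
# Toy availability loop built only from stock OpenStack compute APIs,
# to illustrate the gap: no sub-second failure detection, no state
# replication, no service-level health checks -- just status polling.
import time
import openstack

conn = openstack.connect(cloud="engineering")   # placeholder cloud name

def watch_vnf(server_id: str, interval: float = 5.0) -> None:
    """Poll a VNF instance and hard-reboot it if Nova reports ERROR."""
    while True:
        server = conn.compute.get_server(server_id)
        if server.status == "ERROR":
            # Crude recovery: a real carrier-grade HA service would
            # fail over state, not just restart the instance.
            conn.compute.reboot_server(server, "HARD")
        time.sleep(interval)
```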
In which areas have the standards not evolved sufficiently? Why do you think they have not evolved?
Seiler: Well, this was really addressed in the previous question. But to summarize, APIs for VNF services need to extend beyond what is available in OpenStack today and provide more traditional telecom services than OpenStack does. In MANO, legacy integration with OSS, BSS, and element management systems (EMS) will continue to be difficult because those services were always very vendor-specific in the past, so there is no real standards base to build on or interface to.
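To illustrate why that vendor specificity hurts: with no standard northbound interface, each legacy OSS or BSS needs its own adapter. The sketch below is entirely hypothetical (every class and method name is invented for illustration), but it shows the per-vendor glue code that a standard interface would eliminate.

```python
# Hypothetical adapter layer: with no standard OSS/BSS interface, each
# vendor-specific system needs its own translation of orchestration
# events. Every name below is invented for illustration.
from abc import ABC, abstractmethod

class OssAdapter(ABC):
    """Translate generic orchestration events into one vendor's OSS calls."""

    @abstractmethod
    def notify_service_created(self, service_id: str) -> None: ...

    @abstractmethod
    def notify_fault(self, service_id: str, detail: str) -> None: ...

class VendorAOssAdapter(OssAdapter):
    def notify_service_created(self, service_id: str) -> None:
        # e.g., call Vendor A's proprietary provisioning API here
        print(f"[vendor-a] provisioned {service_id}")

    def notify_fault(self, service_id: str, detail: str) -> None:
        # e.g., raise an alarm in Vendor A's proprietary fault manager
        print(f"[vendor-a] alarm on {service_id}: {detail}")

# Each additional legacy OSS/BSS means another adapter class --
# exactly the integration burden a standard interface would remove.
```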
What are the key components in an open NFV solution? How does a TEM go about assembling such a solution?
Seiler: We discussed the three primary components of an NFV solution: NFV infrastructure, which includes both hardware and software for storage, compute, and network virtualization; VNF services (i.e., the applications); and MANO. There are many different strategies for assembling these solutions. One strategy being employed is the "do-it-all-yourself" strategy, where an equipment manufacturer may have their own NFV infrastructure platform (hardware, software, or both) and their own management solutions. Almost always they will have some core VNF solution that is their area of expertise, for example an SBC, IMS, or EPC solution.
I think the most common method, however, is for a TEM to outsource the NFV infrastructure layer, both software and hardware, because these are industry-standard, often commoditized components that they do not want to develop themselves. Often the TEM will leverage a third-party ecosystem product for the orchestration component as well. Almost always the TEM will have their own set of value-added VNFs, coupled with a VNF manager that interfaces with their EMS. Often these VNFs will be augmented by other third-party ecosystem VNFs.
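Purely as an illustration of that sourcing pattern, the hypothetical decomposition below tags each layer of a TEM's solution as in-house or third-party; none of the component names refer to real products.

```python
# Illustrative decomposition of the outsourcing pattern described above:
# NFVI and orchestration sourced from third parties, value-added VNFs
# and the VNF manager kept in-house. Purely a sketch; the sourcing tags
# are examples, not a recommendation.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    layer: str      # "NFVI", "MANO", or "VNF"
    sourcing: str   # "in-house" or "third-party"

@dataclass
class NfvSolution:
    components: list[Component] = field(default_factory=list)

solution = NfvSolution(components=[
    Component("compute/storage/network platform", "NFVI", "third-party"),
    Component("orchestrator", "MANO", "third-party"),
    Component("VNF manager (interfaces with the EMS)", "MANO", "in-house"),
    Component("core VNFs (e.g., SBC, IMS, EPC)", "VNF", "in-house"),
    Component("ecosystem VNFs", "VNF", "third-party"),
])
```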
What are the risks and pitfalls in building an open NFV solution? What are mitigating actions a TEM can take?
Seiler: There are a number of risks in building any NFV solution. Building an open NFV solution removes some of the risk for the reasons we mentioned earlier: future-proofing, compatibility, and so on. But even when building on open components, the biggest pitfall is simply the time it takes to integrate, validate, and package a complete solution. This can easily take 12 to 18 months, and in this transforming market, that could mean the difference between success and failure. Our experience tells us this sort of undertaking requires, at a minimum, an initial investment of $15 million, and that is just for the core components, not to mention ongoing maintenance and upkeep of the software, development of the VNFs, and creation of an orchestration solution. That is why most TEMs outsource the component-level NFV solution.
What’s the first step for a TEM to take as they look to address the NFV market?
Seiler: We’re seeing many different strategies. Some TEMs are moving to a pure VNF strategy; in short, they are moving away from the practice of providing a "whole solution" that includes an optimized hardware platform, application(s), and management systems. Other TEMs still offer complete solutions, but these usually include "OEMed" components from hardware and software vendors that supply industry-standard parts. The strategy really depends on whether the TEM has a broad business with many different business units; in those cases they will usually create a common platform and leverage industry-standard, high-volume components to get to market quickly and provide a whole solution to service providers. Other, smaller TEMs will often take a VNF-only position and work with any hardware or OS platform. This has the advantage of not carrying component and integration costs, but it raises other validation, testing, and go-to-market issues.
What are the most common business reasons for deploying NFV that you are seeing from your clients?
Seiler: This really raises the question of what the overall business value of NFV is to service providers. For a long time, the standard answer was lower capex, lower opex, and increased service agility. But because NFV often requires a transition of hardware and software to take advantage of virtualization, the initial capex costs are often not lower but higher. And it will take some time for the true value of lower opex to be realized, until all the services can be fully automated and fully integrated with legacy management. So the initial benefits of NFV are turning out to be longer-term benefits, but they are still very real and required for the long-term health of telecom. What is really driving a lot of the strategy today is the creation of a large ecosystem of suppliers. This disrupts the traditional supply chain and removes the single-vendor lock-in. We are seeing a brand-new wave of suppliers of VNFs, NFV infrastructure, and orchestration that can move with more agility than traditional suppliers, give service providers new options for delivering innovative services, and are really delivering on the transformation of telecom to NFV.