- Analysts are not employed by SDxCentral.
- Views and opinions expressed in analyst content belong solely to the author and do not reflect the views of SDxCentral, LLC.
- SDxCentral does not fact-check analyst content. If you believe there is a factual error in analyst content, please notify [email protected]. Should we find factual irregularities, the article will be unpublished from the SDxCentral website.
Effective April 18th, 2019, the SDxCentral analyst blog syndication program has been terminated.
SDxCentral Statement about AvidThink, LLC
- Roy Chua, the founder of AvidThink, was a co-founder of SDxCentral. As of September 30, 2018, Roy is no longer affiliated with SDxCentral.
- The views expressed by AvidThink and Roy Chua are independent of SDxCentral and do not represent the views or journalistic principles of SDxCentral.
- As of April 18th, 2019, SDxCentral is no longer publishing AvidThink analyst blogs on the SDxCentral website.
We’re observing a common phenomenon in the network functions virtualization (NFV) world called ‘VNF Islands.’ This term refers to a problem that service providers face when they move to NFV and deploy specific virtual network functions (VNFs) from different vendors, often at different times to launch different services. As part of a service (VNF)-driven strategy, service providers have tried to ease into NFV by rolling out services as proofs of concept that expand into production deployments. And invariably, many of these deployments consist of an integrated NFV stack from one or a few vendors.
Often each of these collections of VNFs relies on a different variation of OpenStack, the KVM hypervisor, and sometimes even different certified hardware. The result is that the service provider ends up with disparate VNF islands that must then be tied together.
In a new VNF report from SDxCentral, which is part 3 of our NFV report series, we argue that VNF islands are one of the issues service providers will be trying to overcome in the next year. One of the key elements to overcoming these islands may be open source software and standards. Making the NFV infrastructure (NFVI) and management and orchestration (MANO) layers more portable and open through projects such as the Linux Foundation’s ONAP, OPNFV, ETSI OSM, and other efforts around NFVI is critical to unifying these islands. And that’s why we are seeing service providers gravitate toward these open source initiatives.
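One reason these open efforts matter for unifying islands: projects such as ETSI OSM and ONAP push the industry toward common, declarative VNF descriptors, so the same VNF package can in principle be onboarded onto different NFVI/VIM stacks rather than being locked to one island’s particular OpenStack build. As a rough sketch only, a simplified descriptor might look like the following (the field names and values here are illustrative, loosely modeled on ETSI/OSM-style templates, and do not match any one project’s exact schema):

```yaml
# Illustrative, simplified VNF descriptor (not a real OSM/ONAP schema).
vnfd:
  id: firewall-vnf            # hypothetical example VNF
  version: "1.0"
  sw-image: firewall-image-v1 # image reference resolved by the VIM
  virtual-compute:
    vcpu-count: 2
    memory-mb: 4096
  virtual-storage:
    size-gb: 10
  connection-points:
    - id: mgmt                # management network attachment
    - id: data                # data-plane network attachment
```

Because the descriptor states resource requirements declaratively, the MANO layer can map it onto whichever virtualized infrastructure manager a given island runs, which is exactly the portability these open initiatives are pursuing.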
Of course, SD-WAN is one of the most prevalent NFV use cases today. In fact, SD-WAN and virtual CPE (vCPE) have been two of the driving forces behind NFV deployments in the past year. We’ve also seen an increase in efforts around virtual evolved packet core (vEPC) and virtual IP multimedia subsystem (vIMS) as operators prepare their networks in advance of 5G.
But even though service providers want to move faster to NFV, the path is still fraught with obstacles:
- Hardware vendors don’t want to over-invest in VNFs because they fear they will displace their profitable hardware offerings.
- The business model behind VNFs remains in limbo as vendors struggle with how to create profitable licensing businesses that allow service providers to use VNFs from different vendors on a single infrastructure and orchestrate those VNFs.
- Proprietary VNF managers (VNFMs) are needed for different vendors’ VNFs, which not only makes deployment harder but also erodes total cost savings.
Because of these issues, NFV still remains largely in trial status. Nevertheless, I believe that the technology will make progress in the year ahead as service providers attempt to bring together their VNF islands through open source projects. Those types of initiatives may even help propel some of the slower moving standards bodies toward a specification.
In the year ahead, I expect to see NFV move from proof of concept to production, and the barriers to NFV to start subsiding as this momentum grows.
You can download the complimentary 2018 VNF Report referenced above. And if you missed out on the first two parts of this report, download them now: Part 1 NFVI + VIM and Part 2 MANO, LSO, and Assurance.