Containers continue to gain traction across enterprises and service providers alike. They represent the preferred new method of developing, packaging, and deploying new web applications, and are viewed as critical in future edge computing deployments with the rise of initiatives like 5G, IoT (Internet of Things) and MEC (Multi-access Edge Computing).
At the same time, containers are relatively new to the vast majority of developers and IT professionals. At the recent DockerCon (June 2018) in San Francisco, Docker revealed that its audience survey indicated 50% of attendees had started with containers in the last year. This points to a market still in its early stages, with a large population needing education and assistance.
To attract these new developers, we’ve seen efforts by Docker to make installation and deployment easier, along with similar efforts from the public cloud giants to ease container deployment and management—AWS launched Fargate and Microsoft Azure launched ACI. These container-as-a-service approaches (along with serverless or function-as-a-service) are gaining mindshare, though many in the community still need to understand the ramifications of these different, microservice-based architectures for their applications. Porting legacy applications to this new framework can be a daunting, if not fruitless, task. However, the benefits of improved infrastructure efficiency, better scalability, application agility and DevOps-friendliness make containers a strong architectural choice for new applications.
Cloud computing technologies have been with us for more than 10 years, but in the last few years cloud computing has captured significant market and mind share. Market research indicates that over a quarter of US and European enterprises use public clouds this year, with that percentage increasing to 44% by 2021. Our own survey shows that 72% of our readers use private clouds this year, but more significantly, 71% use public clouds, up from 43% just three years ago.
Along with uptake and use of hybrid (private and public) and multi-clouds (use of more than one public cloud), cloud orchestration and automation has become more critical. Public cloud providers are extending their reach into private clouds and private cloud providers are doing likewise into public clouds (Azure Stack, VMware/AWS partnership, Google/Cisco/VMware). There’s also increasing diversity in flavors of clouds consumed, from IaaS (infrastructure-as-a-service) and PaaS (platform-as-a-service), to container-as-a-service (CaaS) and increasing interest in serverless technologies (sometimes called function-as-a-service).
Enterprises and service providers alike see containers as an innovative and scalable way to package and deploy applications across multiple clouds, both private and public. Cloud orchestration solutions, along with DevOps and automation solutions, need to accommodate container options. Popular orchestration platforms for private clouds, OpenStack and VMware vRealize, include support for containers. And many have chosen to incorporate or integrate with the open source Kubernetes, which has emerged as the leading container management and orchestration solution (now adopted by Docker, AWS, and Microsoft), and orchestration vendors are scrambling to further integrate with cloud-specific CaaS like AWS Fargate and Microsoft’s ACI.
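The core idea behind orchestration platforms like Kubernetes is a declarative, desired-state reconciliation loop: the operator declares how many replicas of each containerized service should run, and a controller continuously compares actual state to desired state and corrects any drift. A minimal Python sketch of that pattern follows; the `Cluster` class and its methods are hypothetical, purely for illustration, and are not the Kubernetes API.

```python
# Toy sketch of the desired-state reconciliation pattern used by
# container orchestrators such as Kubernetes. All names are illustrative.

class Cluster:
    def __init__(self):
        self.running = {}  # service name -> number of running replicas

    def start_replica(self, service):
        self.running[service] = self.running.get(service, 0) + 1

    def stop_replica(self, service):
        self.running[service] = max(0, self.running.get(service, 0) - 1)


def reconcile(cluster, desired):
    """One pass of the control loop: converge actual state toward desired."""
    for service, want in desired.items():
        have = cluster.running.get(service, 0)
        for _ in range(want - have):   # scale up if under-replicated
            cluster.start_replica(service)
        for _ in range(have - want):   # scale down if over-replicated
            cluster.stop_replica(service)


cluster = Cluster()
reconcile(cluster, {"web": 3, "api": 2})
print(cluster.running)   # {'web': 3, 'api': 2}

# A replica crashes; the next reconcile pass restores the desired count.
cluster.stop_replica("web")
reconcile(cluster, {"web": 3, "api": 2})
print(cluster.running)   # {'web': 3, 'api': 2}
```

The same loop also handles failure recovery, as the second call shows: a crashed replica simply becomes a gap between desired and actual state that the next pass closes.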
In the five years since ETSI and the world’s leading communications service providers (CSPs) launched the ETSI ISG for NFV (Network Functions Virtualization) in early 2013, vendors and CSPs alike have been working to make NFV a reality. Disaggregation of network services from proprietary hardware, deployment of a fully virtualized infrastructure, modern cloud designs and advanced, agile orchestration and management all promise lower CapEx and OpEx, with improved agility and time to market.
In 2018, the momentum behind NFV continues, even as reality sets in that full NFV adoption will take longer as CSPs slowly transition their infrastructure, business processes and culture—NFV adoption is not simply a technology problem. Regardless, NFV, along with cloud adoption, will continue its uptake and we will cover the state of NFV across our three report series. Our 2018 reports will cover a wide range, from industry-standard commercial off-the-shelf (COTS) hardware as NFV Infrastructure (NFVI), to hypervisors and the Virtualized Infrastructure Managers (VIMs), to Virtual Network Functions (VNFs) and the management and orchestration (MANO) necessary to deploy these functions.
This specific report covers the latest in NFVI and VIM development, the key foundations of any NFV deployment. We continue to see diversity in NFVI hardware, from the dominant standard 19-inch rackmount servers to other forms like blade servers and an increasing focus on hyperconverged infrastructure (HCI) architectures, especially for edge deployment supporting 5G and IoT services. In addition, alternate compute platforms based on ARM architectures are battling x86 architectures from both Intel and AMD for CSP deployments, popping up at the edge and on customer premises equipment (CPE). We’ll cover more on the CPE, vCPE (virtual CPE) and uCPE (universal CPE) trends in our SD-WAN and Virtual Edge report later this year.
Since the European Telecommunications Standards Institute (ETSI) and the world’s leading communications service providers (CSPs) launched the ETSI ISG for Network Function Virtualization (NFV) in early 2013, vendors and CSPs alike have been working to make NFV a reality. As part of the update to our annual NFV report series, we are publishing three reports. The recently published report on NFV Infrastructure (NFVI) and the Virtualized Infrastructure Managers (VIMs) is available for download. This report will cover the management and orchestration (MANO) necessary to deploy these functions, as well as the topic of service assurance. And the upcoming third report will cover Virtual Network Functions (VNFs) and NFV use cases.
The momentum in 2018 for NFV and NFV MANO continues, with significant developments across multiple fronts. As CSPs slowly transition their infrastructure, they are all realizing that NFV adoption involves not only technology but also business processes and culture. And the portion of NFV that ties in closely with business processes, the traditional BSS/OSS systems, is NFV MANO.
Over the past year, we’ve seen significant momentum behind Linux Foundation ONAP (Open Network Automation Platform). More carriers have announced support for it, and many vendors are signed up to be part of the ecosystem, though some may be taking a wait-and-see approach to figure out the real-world traction before investing more than just marketing resources. Regardless, Verizon’s support of ONAP, as well as Bell Canada’s deployment of ONAP modules, lends weight to ONAP, beyond the original deployments from AT&T and China Mobile. There is still significant work required to develop ONAP before it’s clear whether the technology is commercially deployable in environments outside of AT&T where ONAP was incubated and spent its formative years. And it remains to be seen what happens with ETSI’s OSM (Open Source MANO), which effectively competes with ONAP, though there are discussions around collaboration.
Communication service providers (CSPs) worldwide continue to invest in virtualization of their network infrastructure, laying the groundwork for 5G and IoT services. Some have even taken the lead in accelerating service development on top of existing standards, thereby jumpstarting extensions to existing standards and defining new ones with their contributions. Moreover, CSPs continue pushing vendors to disaggregate specialized networking equipment in favor of open architectures.
This final installment in the NFV Report series covers the latest in virtual network functions (VNFs), the workhorses of network functions virtualization (NFV). These are the actual network functions that provide the desired network services. VNFs benefit from the underlying NFV Infrastructure (NFVI) that hosts these services and provides the appropriate virtualization capabilities, as well as the NFV MANO that orchestrates and manages the VNFs and the NFV Infrastructure to roll out NFV services.
Networking in the data center has evolved rapidly over the course of the last few years, driven by the rise of XaaS hyperscale cloud providers, as well as the continued virtualization of the data center—first with VMs and now with containers. Further, the uptake of microservices architecture in applications and increased reliance on big data analytics have changed traffic patterns both within and between data centers.
To adequately support the scale and demand of ever-increasing workloads, as well as requirements of business agility and flexibility, data center builders and operators are turning to new networking technologies, including higher-speed connectivity (the race to 400Gbps), hardware NIC accelerator offloads, software-defined networking (SDN), automation, programmability and intent-based management and orchestration systems. And we’re seeing interest in highly programmable merchant silicon emerging as an alternative to FPGAs (Field Programmable Gate Arrays) and NPUs (Network Processing Units), much of it with support for the P4 programming language.
These technologies are increasingly applied to streamlined data center networking architectures based around 2-3 tier Clos network topologies with ECMP (equal-cost multipath) support, replacing legacy three-tier hierarchical architectures. And some innovative technologists are looking beyond Ethernet to PCIe bus switching with NVMe (a storage technology) for flexible and high-performance data center solutions. Concurrent with pursuing increased capacity and speed, network operators are also implementing finer-grained micro-segmentation and compartmentalization of their networks to defend against new waves of security attacks.
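ECMP load balancing in these Clos fabrics typically works by hashing each packet's flow identifiers (commonly the 5-tuple of source/destination IP, protocol, and source/destination port) and using the hash to select one of the equal-cost next hops. Packets of the same flow always follow the same path, which preserves per-flow ordering while spreading distinct flows across all available links. The following is a simplified Python sketch of that selection logic; real switches compute the hash in hardware, and the function and path names here are illustrative only.

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
    """Pick one of the equal-cost next hops by hashing the flow 5-tuple.

    The same 5-tuple always hashes to the same next hop, so packets of a
    flow never reorder, while different flows spread across all paths.
    """
    flow = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(flow).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

# Four equal-cost uplinks from a leaf switch to the spine layer.
paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
hop = ecmp_next_hop("10.0.0.5", "10.0.1.9", "tcp", 49152, 443, paths)
# The same flow is always pinned to the same spine.
assert hop == ecmp_next_hop("10.0.0.5", "10.0.1.9", "tcp", 49152, 443, paths)
```

One known trade-off of this per-flow scheme is that a single large ("elephant") flow cannot be split across links, which is part of why operators also monitor link-level utilization.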
Software and hardware disaggregation is driving new product categories for next-generation data centers that include converged appliances, high-density servers, and software-defined storage and networking equipment. Among the hottest areas of hardware innovation are converged infrastructure platforms that integrate hyper- and rack-scale hardware with virtualized software stacks for compute and storage.
Converged infrastructure (CI) is typically a large, rack-scale platform that combines compute, storage and networking into a packaged, turnkey product, while hyperconverged infrastructure (HCI) usually refers to 1U or 2U (rack-unit) systems that consolidate one or more multi-core servers with a local storage array. The nodes are controlled by a centrally-managed HCI software stack that provides a hypervisor, software-defined storage and virtual networking that pools resources among all nodes in an HCI cluster. Often, these HCI clusters are connected via high-speed flexible networking fabrics to ensure performance. These all-in-one designs have several significant advantages compared to traditional servers: they are easier to purchase, faster to deploy and manage, and potentially lower in operating and capital costs.
Both CI and HCI help mitigate the inherent difficulty of scaling performance of monolithic designs. Infrastructure designers have learned that it’s much easier to scale capacity and performance using connected, yet distributed building blocks versus large, monolithic systems. Through their ease of deployment, CI and HCI systems can facilitate rapid growth in cloud environments, while supporting lower OpEx through unified and consistent management consoles. Likewise, the scalability and central manageability of CI and HCI platforms allow organizations to consolidate their data center footprint by aggregating applications onto fewer systems or a single logical pool, migrating applications from smaller data centers and computer rooms into a few larger, more efficient facilities.
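The pooling and building-block scaling described above can be illustrated with a toy model: each HCI node contributes local compute and storage, the software stack exposes the aggregate as one logical pool, and workloads are placed against that pool rather than any single box. The Python sketch below is purely illustrative; the node layout, capacities, and the simple spread-style placement policy are assumptions, not any vendor's actual scheduler.

```python
# Toy model of HCI resource pooling: every node contributes local
# compute and storage, and VMs are placed against the pooled capacity
# of the whole cluster. All names and numbers are illustrative.

nodes = [
    {"name": "node1", "free_vcpus": 16, "free_storage_gb": 2000},
    {"name": "node2", "free_vcpus": 16, "free_storage_gb": 2000},
    {"name": "node3", "free_vcpus": 16, "free_storage_gb": 2000},
]

def pool_capacity(nodes):
    """Aggregate free capacity across all nodes in the HCI cluster."""
    return {
        "vcpus": sum(n["free_vcpus"] for n in nodes),
        "storage_gb": sum(n["free_storage_gb"] for n in nodes),
    }

def place_vm(nodes, vcpus, storage_gb):
    """Place a VM on the node with the most CPU headroom that fits it
    (a simple spread-style policy); return the node name, or None."""
    candidates = [n for n in nodes
                  if n["free_vcpus"] >= vcpus
                  and n["free_storage_gb"] >= storage_gb]
    if not candidates:
        return None
    node = max(candidates, key=lambda n: n["free_vcpus"])
    node["free_vcpus"] -= vcpus
    node["free_storage_gb"] -= storage_gb
    return node["name"]

print(pool_capacity(nodes))    # {'vcpus': 48, 'storage_gb': 6000}
print(place_vm(nodes, 4, 500)) # picks whichever node has the most headroom
```

Scaling out is then just appending another node dict to the list, which mirrors the building-block growth model that makes CI and HCI attractive versus monolithic systems.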