The trend of virtualization is one of the most powerful forces in technology. Using data abstraction and control, the functions of compute, networking, and storage are being virtualized to create a more flexible software platform for delivering digital workloads and services to customers.
The vision is that, eventually, nearly all digital and computing services will be controlled by open and compatible software interfaces, automating the orchestration of resources behind the scenes – often from the cloud. Ordering up and controlling services ranging from cloud computing to virtual private networks will be as easy as pushing the buttons on your TV remote.
This virtualized infrastructure is bringing enormous benefits to the technology world, but nobody said it was easy. This massive, worldwide migration of control from the physical to the virtual is a complicated process. Decades of procurement, administration, and business processes are being changed.
The No. 1 focus of IT management is virtualization. In Protiviti’s “2015 IT Priorities Survey,” 86% of those surveyed labeled it a “significant priority.” This survey had more than 1,000 respondents including CIOs, IT vice presidents, and IT directors. The second highest priorities were security related, with 83% citing virus and advanced threat eradication as a significant priority; data breach and privacy laws also came in at 83%.
All of these issues are connected, of course. Security and virtualization go hand in hand in the cloud (see “Why Security Is a Leading Network Virtualization Driver”). In the legacy, physical world of computing, resources could be cordoned off by physical or software firewalls. But as the world’s computing power is migrated to the cloud and connected to the Internet, security must become more virtual as well.
Computing power has been transformed by server virtualization and the cloud model. The next step is to extend that to network virtualization (NV). In our recent “2015 Special Report: Network Virtualization in the Data Center,” our research and user survey generated a lot of information about the operational challenges of NV. Below are some of the top operational challenges that IT managers should watch out for in their shift to NV.
Scalability
When asked to identify the top two attributes organizations are looking for from NV solutions, scalability got a majority vote in our research – from 51% of those polled. Some NV solutions may have performance challenges when the control planes reach their limits or when the data plane replicates broadcast, unknown unicast, or multicast traffic across the physical fabric. Vendors are working to overcome these limitations, but whether the solutions can scale to support large cloud environments with tens of thousands of physical servers remains to be seen.
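The data-plane replication cost mentioned above can be sketched with a back-of-the-envelope calculation. When an overlay uses head-end (ingress) replication rather than underlay multicast, each broadcast/unknown-unicast/multicast (BUM) frame is unicast once to every other tunnel endpoint, so overhead grows linearly with fabric size. The function name below is illustrative, not from the report:

```python
def bum_copies(num_vteps: int, bum_frames_per_sec: int) -> int:
    """With head-end (ingress) replication, each BUM frame entering one
    VTEP is unicast once to every other VTEP in the same segment."""
    return (num_vteps - 1) * bum_frames_per_sec

# 1,000 VTEPs, 100 BUM frames/s entering one of them -> 99,900 copies/s
print(bum_copies(1000, 100))
```

Even this simple model shows why unknown-unicast flooding becomes a scaling concern long before control-plane limits are reached.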
Openness of Solution
In our survey of the top two attributes of NV, openness and interoperability came in second at 39%, which speaks to the heterogeneous nature of most organizations and their need to ensure the solutions work together.
Look for NV solutions that support standard protocols to improve compatibility with other network elements. VXLAN is the most popular encapsulation protocol today. On the direct-fabric programming side of the equation, the OpenFlow protocol is the most prevalent southbound protocol supported by switches, enabling the NV solution to have a wider reach. Cisco’s proprietary ACI has also experienced strong market adoption.
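To make the encapsulation concrete, here is a minimal sketch of the 8-byte VXLAN header defined in RFC 7348: a flags byte with the “I” bit set (indicating a valid VNI), three reserved bytes, a 24-bit VXLAN Network Identifier, and a final reserved byte. The helper name is hypothetical:

```python
import struct

VXLAN_FLAGS = 0x08  # "I" bit set: the VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    return struct.pack("!B3xI", VXLAN_FLAGS, vni << 8)

print(vxlan_header(5000).hex())
```

This header is prepended to the original Ethernet frame and carried over UDP (destination port 4789), which is why any device on the physical path sees only ordinary UDP traffic.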
Usability and Manageability
Ease of implementation and performance came in at 36% and 35%, respectively, in our survey of most important attributes. Usability and manageability are key considerations. The control and management consoles for these solutions have to accommodate tens of thousands or hundreds of thousands of networks (perhaps millions in the near future). This unprecedented scaling presents a new challenge for user interfaces.
Hypervisor Compatibility
Another dimension of openness: virtualization customers need to make careful distinctions among the many hypervisor solutions out there and determine whether they work together. Some solutions are marketed as “compatible” with a hypervisor but may rely on a virtual switch instance running in a virtual machine (VM). This is true with VMware’s ESXi, which today restricts the choice of virtual switches to its built-in version or a limited set of partners. Workarounds are emerging, however, and non-VMware vendors are beginning to address these VM performance issues.
Network Model
Choosing the network model can have huge architectural implications. Look for a mature NV solution that provides a network model that helps visualize and define the topology – including Layer 2, Layer 3, routing, gateways, Layer 4-7 chaining, and so on. The more complex the environment, the more critical the model is to ensuring the network is working properly.
Physical Switch Compatibility
NV solutions try to operate primarily within the virtual domain, but most still require some integration with the physical infrastructure. The availability and performance of physical-to-virtual gateways is critical. For NV solutions using VXLAN, an increasing number of switches can act as hardware VTEPs (VXLAN tunnel endpoints) to facilitate these boundary transitions at wire speed.
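At its core, the gateway’s job is a forwarding lookup: map each tenant MAC address (within a VNI) to the remote VTEP that hosts it, then tunnel the frame there. A hypothetical sketch of that table, with illustrative names and addresses:

```python
# Hypothetical VTEP forwarding table: (VNI, MAC) -> remote VTEP IP.
# A hardware VTEP holds the same mapping in switch silicon.
fdb = {}

def learn(vni: int, mac: str, vtep_ip: str) -> None:
    """Record which remote VTEP a tenant MAC lives behind."""
    fdb[(vni, mac)] = vtep_ip

def lookup(vni: int, mac: str):
    """Return the remote VTEP for a MAC; unknown MACs would be
    flooded (BUM traffic) to all VTEPs in the VNI instead."""
    return fdb.get((vni, mac))

learn(100, "52:54:00:aa:bb:cc", "10.0.0.2")
print(lookup(100, "52:54:00:aa:bb:cc"))
```

Whether this table is populated by a controller, by data-plane learning, or by a protocol such as OVSDB is one of the key differences among gateway implementations.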
QoS Support
Many networks are jitter- or latency-sensitive, such as those supporting streaming media, voice, or critical apps (e.g., financial and medical), where the presence of QoS support (Layer 2/Layer 3) can be helpful. In such environments, NV solutions based on direct-fabric programming may be able to provide better QoS control than pure overlay solutions.
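One reason pure overlays struggle here: the underlay only sees the outer tunnel header, so the tenant’s QoS marking must be copied from the inner packet to the outer one for physical switches to honor it. A minimal sketch of that DSCP propagation, with a hypothetical helper name:

```python
EF = 46  # Expedited Forwarding DSCP, commonly used for voice traffic

def outer_tos_from_inner(inner_tos: int) -> int:
    """Copy the 6-bit DSCP from the inner IP header's ToS byte into the
    outer (tunnel) header, zeroing the 2 ECN bits in the outer copy."""
    dscp = (inner_tos >> 2) & 0x3F
    return dscp << 2

# Inner voice packet marked EF (ToS 0xB8, plus an ECN bit set):
print(hex(outer_tos_from_inner((EF << 2) | 0b01)))
```

If the encapsulator skips this step, the underlay treats all tunnel traffic as best-effort, regardless of how the tenant marked it.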
Visibility and Analytics
Visibility is critical for network troubleshooting and management. The ability to look into overlay tunnels and understand how they traverse a particular physical path is key to optimizing the network’s performance.
NV vendors tout monitoring and analysis within their solutions, with the ability to analyze traffic trends (throughput, latency) within the solution itself. However, most of these implementations are relatively basic, focusing on data capture. Look for capabilities to mature in the coming months to improve analysis.
These are the most critical issues we found when examining the operational approach to NV.
As you can see, the list of operational challenges is large. Our 2015 paper found many other critical issues to look out for, including Layer 3 routing capabilities, orchestration stack integration, support for containers, cross-data center capabilities, network services, service chaining, automation, and programmability.
For more detail, see our “2015 Special Report: Network Virtualization in the Data Center,” which is available online or as a PDF download for our community members.