These days there’s clearly a massive amount of interest in all things relating to network function virtualization (NFV). But if historical trends hold, the arrival of high-performance hardware could accelerate NFV adoption by providing a stronger platform for NFV applications.
The history of computing is one of balance between hardware and software. Time and again, hardware advances have proven to be a boon to software, because hardware innovation can mitigate the overhead introduced by new software. NFV is not likely to be any different. As SDxCentral has been covering as part of its Business Insights series, virtualization introduces a performance penalty that must be solved with hardware.
“When you introduce network virtualization on top of the same physical switches in place you gain flexibility,” says Cliff Grossner, an industry analyst with IHS Technology. “But you also take a performance hit.”
To compensate for that performance hit, Grossner says more aggressive rollouts of network virtualization platforms in production environments will naturally occur as more robust hardware platforms to run this software become available.
“History has already shown that some virtualization functions always wind up getting embedded in silicon to improve performance,” adds Grossner.
Virtualization Looks for Speed
The good news is that there is now a raft of hardware technologies that promise to boost NFV deployments and help build out more robust software-defined data centers (SDDCs). This could benefit data centers for service providers and enterprises alike. A recent report released by Dell shows that 80 percent of IT and business decision makers are in the process of making the transition to an SDDC architecture. Vendors of all sizes are aiming to accelerate that shift via a wide variety of upgrades to existing network infrastructure.
For example, Netronome has announced a 25G Ethernet version of its adapter that offloads the processing of virtual switches such as Open vSwitch (OVS) from the server. With samples scheduled to be available in September, Nick Tausanovitch, vice president of solution architecture and silicon product management for Netronome, says the goal is to make it simpler for organizations to embrace NFV applications without having to use up processor cores on a server to run them.
“There’s a 20X gain in efficiency when you offload OVS from the server,” says Tausanovitch.
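To put the quoted 20X figure in perspective, here is a back-of-envelope sketch of what offload can mean in freed-up cores. All of the numbers besides the 25G link speed and the 20X gain are illustrative assumptions, not vendor benchmarks:

```python
# Back-of-envelope sketch of SmartNIC offload savings.
# Assumptions (not measured figures): average packet size and
# per-core OVS throughput are illustrative placeholders.

LINK_GBPS = 25            # 25G Ethernet adapter, as in the Netronome example
AVG_PKT_BYTES = 512       # assumed average packet size
HOST_MPPS_PER_CORE = 1.5  # assumed OVS throughput per x86 core (Mpps)
OFFLOAD_GAIN = 20         # the quoted 20X efficiency figure

# Packets per second needed to fill the link at the assumed packet size.
pkts_per_sec = LINK_GBPS * 1e9 / 8 / AVG_PKT_BYTES

# Cores consumed running OVS on the host versus with offload.
cores_on_host = pkts_per_sec / (HOST_MPPS_PER_CORE * 1e6)
cores_with_offload = cores_on_host / OFFLOAD_GAIN

print(f"{pkts_per_sec / 1e6:.1f} Mpps at line rate")
print(f"~{cores_on_host:.1f} cores for OVS on the host CPU")
print(f"~{cores_with_offload:.2f} core-equivalents with offload")
```

Under these assumptions, roughly four cores of virtual switching collapse to a fraction of a core once the work moves to the adapter — cores that can instead run revenue-generating workloads.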
For similar reasons, Array Networks has created its AVX series of appliances for offloading application delivery and security functions from a server.
Hybrid Hardware Approach for NFV
Paul Anderson, director of marketing for Array Networks, says that while Array Networks makes it possible to run its software on a server or on a dedicated appliance, the simple fact is that dedicated hardware makes it simpler to guarantee network service levels.
Rather than getting caught up in philosophical debates, Anderson suggests most organizations would be better off realizing that they will wind up deploying advanced networking services on dedicated hardware as well as on commodity servers. On a public cloud, for example, deploying software may be the only real option, while on premises an IT organization is going to want higher performance to meet the demands of next-generation networking software.
“There’s always going to be a performance penalty for virtualization,” says Anderson.
Obviously, many providers of network virtualization and software-defined networking technologies are anxious to get their wares deployed sooner rather than later. But most networks today consist of a mix of physical switches and routers from different vendors that are already maxed to capacity.
In fact, much of that networking gear is more than three years old. Network virtualization injects a much-needed layer of agility into those environments, making it possible to manage networks at a higher level of abstraction. But the simple fact of the matter is that all that network virtualization software needs to run somewhere. For every CPU cycle allocated to running network virtualization software, there is just that much less horsepower left to process packets.
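The cycle-budget trade-off can be made concrete with some simple arithmetic. The clock speed, packet rate, and per-packet virtual-switch cost below are all assumed figures for illustration:

```python
# Rough per-packet cycle-budget sketch (assumed figures, illustration only).
# Every cycle spent on virtual switching is a cycle unavailable for the
# packet-processing work the network function itself needs to do.

CORE_GHZ = 3.0        # assumed clock speed of one server core
PKTS_PER_SEC = 10e6   # assumed 10 million packets/sec handled by that core
VSWITCH_CYCLES = 120  # assumed per-packet cost of the virtual switch

budget = CORE_GHZ * 1e9 / PKTS_PER_SEC  # total cycles available per packet
left_for_vnf = budget - VSWITCH_CYCLES  # what remains for actual forwarding

print(f"{budget:.0f} cycles per packet in total")
print(f"{left_for_vnf:.0f} cycles left after the virtual switch")
```

At these assumed rates, the virtual switch alone eats 40 percent of the per-packet budget — which is exactly why the vendors above are pushing that work into dedicated silicon.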
While organizations as a whole may be excited by the prospect of their networks becoming easier to manage, almost none of them will be willing to sacrifice anything in the way of application performance to actually achieve it.