Software-defined networking (SDN), network functions virtualization (NFV), overlays, and other forms of network virtualization have created the possibility for great leaps forward in feature velocity, performance elasticity, scalability, and capex/opex savings for networks worldwide. A centerpiece of this new era is the consolidation of multiple, diverse, and possibly customized network equipment systems onto standard and virtualized platforms, where network functionality is implemented and linked via standard protocols.
With these benefits, it is obvious why service providers would want to move toward a virtualized environment. The challenge, however, is that in truly virtualized environments, the capabilities of embedded processors—including accelerators, I/O blocks, and other integrated peripherals—are harder to capitalize upon.
Even today’s multicore communication processors that support partitioning of chip resources—a basic form of virtualization—are challenged to allocate these resources in a way that’s easy for designers to use and sufficiently dynamic for future environments where virtual machines fluidly spawn, die, and migrate among hosts. But ultimately, forgoing these resources and implementing all functions in software will sacrifice performance and cost savings.
How Processors Can Help
To achieve these benefits, future multicore processors must continue to excel at what today’s best chips do, including:
- Securing network traffic by accelerating functions such as encryption and deep packet inspection (one software path to such engines is sketched after this list).
- Accelerating protocol processing, offloading ever more functions from the general-purpose CPUs so they can run more virtual machines.
- Delivering high-performance (but not necessarily maximum performance) general-purpose computing capability.
- Integrating accelerators and CPUs with flexible, standard I/O interfaces.
- Delivering all of this in a power-efficient package.
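Much of this offload is already reachable through standard software interfaces rather than vendor-specific driver calls. As a minimal sketch, the C program below uses Linux’s AF_ALG socket API, which routes a request to a hardware crypto driver when the SoC registers one and falls back to a software implementation otherwise; the same socket family also covers ciphers via the "skcipher" type. Error handling is omitted for brevity.

```c
/* Minimal sketch: computing a SHA-256 digest through Linux's AF_ALG
 * interface. The kernel dispatches to a hardware engine when a driver
 * for one is registered, so application code stays vendor-neutral.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
    struct sockaddr_alg sa = {
        .salg_family = AF_ALG,
        .salg_type   = "hash",    /* "skcipher" would select a cipher */
        .salg_name   = "sha256",
    };
    const char *payload = "packet payload";
    unsigned char digest[32];

    int tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);   /* transform handle */
    bind(tfm, (struct sockaddr *)&sa, sizeof(sa));

    int op = accept(tfm, NULL, NULL);              /* operation context */
    write(op, payload, strlen(payload));           /* feed the engine */
    read(op, digest, sizeof(digest));              /* collect the digest */

    for (size_t i = 0; i < sizeof(digest); i++)
        printf("%02x", digest[i]);
    printf("\n");

    close(op);
    close(tfm);
    return 0;
}
```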
They must also make several important improvements:
- Virtualize CPUs and peripherals in a way that is easy for hypervisor coders and system designers to use, by providing not only abstract representations of these on-chip resources, but also abstract representations of how they are controlled (one such standard abstraction is sketched after this list).
- Use CPUs based on genuinely standard instruction sets—those that are implemented by multiple vendors—to facilitate the shared development model of open source and to minimize vendor lock-in.
- Implement protocol acceleration in software for maximal flexibility and programmer productivity.
- Ship with extensive vendor-supplied libraries, APIs, and helper applications—all well documented—to minimize designers’ time to market.
- Provide flexible options for high-speed connectivity among virtual machines and to the outside world.
- Support advanced debugging features so that even in the most complex designs, the hairiest bugs can be squashed.
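The first and fifth items on this list already have a working model in the virtio standard, which lets guests and hypervisors exchange buffers through a vendor-neutral descriptor ring; the same mechanism, via vhost-user and similar transports, also underpins high-speed VM-to-VM connectivity. The structure below paraphrases the split-ring descriptor from the virtio 1.x specification; the helper function is a simplified illustration, not production hypervisor code.

```c
#include <stdint.h>

/* Split-ring descriptor as defined by the virtio 1.x specification.
 * A guest driver fills these entries; the device (or its emulation)
 * consumes them. No vendor-specific registers are involved.
 */
#define VIRTQ_DESC_F_NEXT   1   /* buffer continues in 'next' */
#define VIRTQ_DESC_F_WRITE  2   /* device writes (vs. reads) the buffer */

struct virtq_desc {
    uint64_t addr;   /* guest-physical address of the buffer */
    uint32_t len;    /* buffer length in bytes */
    uint16_t flags;  /* VIRTQ_DESC_F_* bits */
    uint16_t next;   /* index of chained descriptor, if F_NEXT is set */
};

/* Simplified illustration: post one outbound buffer to the ring.
 * Ring index and notification management are abbreviated here.
 */
static void post_buffer(struct virtq_desc *ring, uint16_t idx,
                        uint64_t guest_paddr, uint32_t len)
{
    ring[idx].addr  = guest_paddr;
    ring[idx].len   = len;
    ring[idx].flags = 0;          /* device reads this buffer */
    ring[idx].next  = 0;
}
```

Because the ring layout is fixed by the specification, a driver or hypervisor written against it runs unchanged across any compliant SoC, which is precisely the kind of easy-to-use abstraction of on-chip resources this list calls for.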
With these improvements, embedded multicore processors will meet the challenge of providing optimum performance while juggling virtual machines. The balance processor suppliers must strike will be between standardization and differentiation. Too much emphasis on standardization and the industry will stagnate. Too much differentiation—or at least exposure of low-level chip differences to system developers—and the processor is no longer easy to use.
Even worse, OEMs and their customers could coalesce around vendor-proprietary technologies masquerading as open standards. This is unlikely, given the industry’s cognizance of how the PC industry’s Wintel model provided initial benefits from standardization but led to limited choices, high costs for critical technologies, and mediocre and undifferentiated systems.
The network virtualization era has set into motion a new processor race in which ease of use and efficiency are as important as raw performance and core count, if not more so. From a high level, future multicore chips look much like today’s designs, providing CPUs, accelerators, and interfaces. Developers who work with them closely will find that much of the complexity is hidden, that the chips are more flexible, and that they are much better at virtualization. These improvements will enable the promises of network virtualization to be realized.