Standardization on x86 commercial off-the-shelf (COTS) servers within the data center has been a movement for some time because the architecture offers versatility, cost-savings, easier integrations, more attractive maintenance and management profiles, and, overall, a lower total cost of ownership than a proprietary hardware approach. But there are new requirements that are driving data center server choices these days, namely the need to support carrier virtualization, programmability, and the massive data sets that come with machine learning and advanced, real-time analytics.
Network function virtualization (NFV) and software-defined networking (SDN) in particular have started to take hold in the data center in real ways, and the underlying hardware layer has become abstracted from the intelligent software running above.
“Data center choices are based around the idea of standardization, and the goal is to move all of the intelligence into the software layer, which has moved us away from hardware specialization,” explained Ashish Nadkarni, program director within IDC’s worldwide infrastructure practice. “Everyone is using industry-standard silicon instead of customized chips so that servers can be put to multiple uses by simply replacing the software stack—whether it’s network, storage, or applications, you can use a similar type of server. The apps themselves are virtualized and move from server to server via the orchestration layer.”
In theory, this should bolster the COTS trend. In practice, however, big I/O performance requirements in some use cases have pushed users toward adding hardware specialization to the mix, because pure "white box" strategies bring trade-offs with them, some of which are unacceptable for next-generation network architectures.
“When you embrace general-purpose hardware, it’s not geared for performance,” said Nadkarni. “That standard CPU is like a Swiss army knife—it does a lot, but it’s not best-of-breed for anything. Users are generally OK with this because you can get better efficiencies. But the performance isn’t there to support NFV or cognitive computing.”
The Need for Performance Enhancement
NFV and SDN may be the most visible workloads straining the limits of available COTS server performance, but a range of next-generation workloads is also demanding big gains over traditional data center architecture: the large data sets and dynamic, real-time application portability found in big data analytics implementations; artificial intelligence (AI), machine learning, and cognitive computing environments; and security workloads. All of these require heightened I/O performance to provide an optimal end-user experience, meaning that servers must be high-performance, low-latency, and cache-coherent.
“Telcos and large service providers have very different requirements for packet processing than what we see in a typical data center,” explained Cliff Grossner, senior research director and advisor for the Cloud & Data Center Research Practice at IHS Markit. “Many applications in these environments are packet-processing apps. When we get to next-generation offerings and networks, they need strategies for accelerating and enhancing the performance of that data packet processing plane.”
He added that it’s not a question of if, but when AI technologies become a necessary part of the mix.
“Co-processors for analytics and AI applications will be used heavily in telco and NFV down the road,” Grossner said. “Telcos and others will use telemetry to improve the user experience and spur end-user autonomy, and those algorithms can strongly benefit from acceleration. We believe that as a matter of evolution, servers will have more and more co-processors in them that can do processing on many points of data at one time, which speeds up the repetitive calculations.”
Tweaking the Server
Some approaches to supporting these new models involve tweaking general-purpose CPUs with the Data Plane Development Kit (DPDK) to optimize them for advanced use cases. That approach, favored by Intel, uses a programming framework that optimizes the data path between applications and the network interface cards (NICs). In doing so, it enables applications to process data packets faster, which is especially beneficial for workloads that handle a substantial amount of Ethernet packet processing or high message rates, such as virtualized network functions (VNFs).
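To give a sense of what that kernel-bypass data path looks like, here is a minimal DPDK-style polling-loop sketch in C. It assumes a single Ethernet port (port 0) with one RX and one TX queue, and it omits error handling and EAL configuration details; it illustrates the poll-mode pattern rather than serving as a production forwarder.

```c
/* Minimal DPDK poll-mode sketch: receive a burst of packets on port 0
 * and transmit them back out the same port (a trivial reflector).
 * Assumes DPDK is installed; most error handling is omitted. */
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer (hugepages, cores, PCI). */
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    /* Pool of packet buffers (mbufs) shared by the NIC RX/TX rings. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "MBUF_POOL", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    /* One RX queue and one TX queue on port 0, default device config. */
    struct rte_eth_conf port_conf = {0};
    uint16_t port = 0;
    rte_eth_dev_configure(port, 1, 1, &port_conf);
    rte_eth_rx_queue_setup(port, 0, 1024, rte_socket_id(), NULL, pool);
    rte_eth_tx_queue_setup(port, 0, 1024, rte_socket_id(), NULL);
    rte_eth_dev_start(port);

    /* Busy-poll loop: the CPU core spins pulling packets straight from the
     * NIC, bypassing the kernel network stack entirely. */
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;
        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
        /* Free any packets the TX ring could not accept. */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
    return 0;
}
```

The trade-off this pattern makes is exactly the one described above: a dedicated core spins at 100 percent utilization polling the NIC, buying low latency and high packet rates at the cost of general-purpose CPU cycles.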
Another option is to implement smart NICs with hardware acceleration, offered by vendors like Mellanox, Cavium, and Netronome. These are effectively systems-on-chip (SoCs) on a NIC that allow servers to run the packet forwarding plane on the card itself, offloading that workload from the CPU, freeing up resources, and improving performance.
“Anywhere from 10 to 15 cores can be taken up by packet processing alone, so with programmable NICs, that leaves the CPU free to do other things,” said Grossner. “These can be put into a white box or any other server.”
Nadkarni added: “This keeps the CPU pretty standard, but you’re augmenting the server with custom hardware as a bolt-on.”
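One common way this kind of offload is expressed in software is through a flow-rule API such as DPDK's rte_flow, which lets an application hand match-and-forward rules down to NIC hardware when the card and driver support it. The sketch below is illustrative only: the function name, queue index, and match pattern are arbitrary choices for this example, and whether the rule actually executes in NIC silicon depends on the specific smart NIC and its driver.

```c
/* Sketch: ask the NIC to steer IPv4 traffic for a given destination address
 * into a dedicated RX queue, so classification happens in NIC hardware
 * rather than on CPU cores. Assumes the port is already configured and the
 * driver supports rte_flow offload; returns NULL if the rule is rejected. */
#include <rte_flow.h>
#include <rte_byteorder.h>

struct rte_flow *steer_ipv4_dst_to_queue(uint16_t port_id, uint32_t dst_ip,
                                          uint16_t queue_index)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    /* Match: any Ethernet frame carrying IPv4 with the given destination. */
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr.dst_addr = rte_cpu_to_be_32(dst_ip),
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr.dst_addr = rte_cpu_to_be_32(0xffffffff),
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* Action: deliver matching packets to a specific hardware RX queue. */
    struct rte_flow_action_queue queue = { .index = queue_index };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    struct rte_flow_error error;
    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}
```

The same program can run on a plain NIC or a smart NIC; the difference is simply how much of the matching and forwarding the card is able to absorb, which is what frees those 10 to 15 cores Grossner describes.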
Open-Source Efforts Grow
The other element that will be important in the evolution of server strategies is the continued maturation of open source efforts.
The servers of the future will likely have multiple accelerated processing elements attached via high-speed fabrics, which will drive the need for standards. To that end, OpenCAPI (where CAPI stands for Coherent Accelerator Processor Interface) was launched last fall as a new interface standard that enables very high-performance accelerators, such as FPGA, graphics, network, and storage accelerators, to talk to servers in a high-performance way. OpenCAPI offers a 25 Gb/s data transfer rate, compared to earlier efforts like the PCIe 3 approach, which offers just 16 Gb/s.
OpenCAPI and other open standards may then be implemented in a specialized configuration. For instance, Rackspace and Google recently put on the first public display of the Open Compute Project-OpenPower server platform announced last year, code-named Zaius and Barreleye G2. Zaius will double CPU performance and memory capacity, and will quintuple peripheral bandwidth, compared with the prior OpenPower system. It relies on OpenCAPI to tie together GPGPUs, FPGAs, and other specialized processors, as well as make them faster and easier to program.
With advanced use cases showing no sign of a slowdown in adoption, balancing data center performance with standardized server approaches that drive efficiencies will continue to be a challenge. For instance, it's possible to boost performance with enhancements like wide memory buses, large caches, and high-frequency, high-wattage processors, but at a certain point these will require liquid or cryogenic cooling systems in the data center, a significant and expensive shift in physical footprint. Open source standards will be the incubation chamber for solving this issue and others as the industry moves forward.
“These are important initiatives for keeping carrier NFV healthy and providing a way for vendors to interact in new ways,” Grossner said. “It’s important moving forward to continue those ecosystems.”