When talking about software-defined networking (SDN) and network functions virtualization (NFV), we often talk about software that runs on x86 servers. The term x86 refers to a family of instruction set architectures for silicon chips. Most x86 servers that run SDN and NFV software use chips from either Intel or AMD. But that could change, as ARM is making a concerted effort to take market share in this segment.
ARM doesn’t make chips. It creates the intellectual property (IP) building blocks for a chip. Like x86, ARM defines its own instruction set architecture. But instead of manufacturing chips, ARM licenses that architecture and its core designs to customers — such as Qualcomm, Broadcom, NXP, and Cavium — and those companies manufacture the chips.
Bob Monkman, director of software strategy for network infrastructure at ARM, said the company refers to its products as ARM-based systems-on-a-chip (SOCs). He explained that when ARM’s customers make the chips, they often add other processing engines for specific tasks — for example, engines for networking applications or for video applications. He said these specialized engines in an integrated SOC deliver high performance at low power.
“Our bread and butter for years was mobile phone chips,” said Monkman. “That’s where we made a name for ourselves.” The company designs three categories of cores: “A,” “R,” and “M.” The R (real-time) and M (microcontroller) categories are simpler, fixed-function designs used in embedded devices, while the “A” (application) cores power devices such as mobile phones and are the ones ARM is targeting at network servers.
At last week’s Open Networking Summit (ONS) show in Los Angeles, Monkman said, “This is an ‘A’ show for us. We decided roughly eight years ago we were going to expand our markets. We want to get into servers and networking infrastructure.”
Monkman made a case for why ARM-based SOCs are a compelling choice for networking servers. He said the company’s “secret sauce” IP provides improved performance per watt (PPW) and performance per area (PPA). “We can lay down cores in a much smaller area and pack more cores into a same-size die,” he said. “The amount of compute you can do in a physical space is greater, typically twice the density of x86.”
In addition, some performance improvement comes from the fact, noted above, that ARM’s customers build processing engines for specific tasks directly into the chip. “With ARM, it’s typical that everything is put on a single die because the customer is going to put the actual chip together,” he said. “An Intel server would have a pure server compute machine, and they would plug in a specialized engine on a separate card.”
ARM in the Central Office
ARM is emerging as an interesting platform for lots of NFV and edge deployments due to its lower-power and cooling advantages over x86. And ARM is now part of the continuous integration (CI) framework for OPNFV.
Monkman indicated that in addition to data centers, the telco central office is a target market for ARM. Telecom operators have thousands of existing central offices, and they’re re-architecting them around software, following models such as the Central Office Re-architected as a Data Center (CORD). “We think we can compete in this space,” he said. “With 5G they have to get 100 times more bandwidth with 30 to 50 times lower latencies. In order to do that they have to have a much more efficient compute architecture. We can get you more compute out of the same physical real estate.”