The threat of white box switches has cast a shadow over big network equipment vendors like Cisco, HPE, and Juniper for years, as the combination of Moore’s Law and economies of scale gives merchant switch silicon an overwhelming price/performance advantage over proprietary platforms. Announcements at this year’s Open Compute Summit point to a future where SDN and network virtualization mix open software with merchant silicon.
The roadblock has been two-fold: an entrenched installed base of inherently conservative network engineers reluctant to abandon years of hardware expertise and a lack of switch software with the features and management hooks to credibly deliver next-generation network virtualization. While human inertia will always be hard to overcome, as announcements at this year’s Open Compute Summit demonstrated, freely available switch software is using the open-source playbook to close the feature gap with proprietary products.
Battling the SDN Establishment
SDN was supposed to be the innovation that toppled legacy switch platforms. However, whether due to technological immaturity, vendor infighting or a confusing value proposition, it hasn’t worked out that way. So far, established hardware and software vendors including Cisco, Juniper Networks, and VMware have dominated the emerging SDN market. Our recent user survey data and reports confirm this.
It could be that the marketing power of established vendors and their installed base has been underestimated. Writing about Arista recently, Pacific Crest analyst Brent Bracelin nailed the difficulty that cheap, commodity-powered switches with SDN stacks have had in gaining market share:
“2016 is the fifth consecutive year that the threat of software-defined networking (SDN) in some form or fashion was said to be on the verge of derailing Arista Network’s promising growth trajectory as a leading switch supplier into six of the seven largest cloud titans. Despite all the pomp and circumstance, the SDN disruption risk has proven to be more bark than bite.”
As the PC business proved, commodity hardware isn’t enough: Intel was nothing without Microsoft. With server virtualization, robust commercial software platforms from the likes of Citrix, Microsoft, and VMware helped push the market forward.
But the next wave of merchant silicon may give a white-box approach to SDN another push forward. Microsoft recently introduced SONiC (Software for Open Networking in the Cloud) and contributed it to the Open Compute Project, hoping to create a standard hardware-software stack for virtualized, scale-out networks using a centralized controller. Paired with Microsoft’s previously submitted Switch Abstraction Interface (SAI), which decouples common switch ASIC networking functions and exposes them as a standard API, SONiC, which runs on Debian-based Open Network Linux, provides a modular software platform for building network functions and services that work on all supported switch hardware. According to the lead architect for Azure networking, “SONiC brings together all the building blocks to form an open sourced, fully functional, secure, and reliable cloud switch.”
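The idea behind SAI is the classic abstraction-layer pattern: each vendor’s ASIC driver implements one common interface, so the network OS above never touches vendor-specific code. The sketch below illustrates that pattern only; the class and method names are hypothetical and not the actual SAI C API.

```python
from abc import ABC, abstractmethod

class SwitchAsic(ABC):
    """Common switch abstraction (illustrative, not the real SAI C API)."""
    @abstractmethod
    def create_vlan(self, vlan_id: int) -> None: ...
    @abstractmethod
    def add_port_to_vlan(self, port: int, vlan_id: int) -> None: ...

class VendorAAsic(SwitchAsic):
    """One vendor's driver; a different ASIC would subclass the same interface."""
    def __init__(self):
        self.vlans = {}  # vlan_id -> set of member ports
    def create_vlan(self, vlan_id):
        self.vlans.setdefault(vlan_id, set())
    def add_port_to_vlan(self, port, vlan_id):
        self.vlans[vlan_id].add(port)

def provision_leaf(asic: SwitchAsic, vlan_id: int, ports: list):
    """Network-OS logic written once against the abstract interface,
    portable to any ASIC whose driver implements SwitchAsic."""
    asic.create_vlan(vlan_id)
    for p in ports:
        asic.add_port_to_vlan(p, vlan_id)

asic = VendorAAsic()
provision_leaf(asic, 100, [1, 2, 3])
print(asic.vlans)  # {100: {1, 2, 3}}
```

The payoff is that `provision_leaf` (and everything above it in the stack) needs no changes when the underlying switch silicon changes, which is precisely the portability SONiC claims across its hardware partners.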
SONiC and Merchant Silicon Integration
Microsoft, which is undoubtedly testing SONiC to deliver various Azure network services, has already signed up four hardware partners: Arista, Broadcom, Dell and Mellanox. Indeed, at the OCP Summit, Mellanox demonstrated its SAI-compliant 10/25/50/100 Gbps Spectrum switch and ConnectX-4 Lx adapter cards running OpenSwitch. The switch also supports Cumulus Linux, ONL/SONiC, Metaswitch Networks software and the proprietary MLNX-OS. Mellanox has embraced what it calls composable networks, which can be “disaggregated into functional independent modules with open, clearly defined abstraction interfaces.” In this model, Mellanox builds what it does best, very fast switch silicon and hardware, while others, typically open source projects, provide the core OS and APIs that together allow enterprises and ISVs to develop virtual network services at both the packet switching (SDN) and application (NFV) layers.
Although not yet part of the SONiC ecosystem, Cavium also demonstrated merchant switch silicon running OpenSwitch at the OCP Summit and announced a corresponding set of APIs, called OpenXPS (the XPliant Programmable Switch), for open source applications. The OpenXPS APIs abstract the existing XPliant SDK, allowing other control plane operating systems (perhaps including ONL/SONiC?) to be ported to the platform and developers to add higher-level virtualized services. Like the Mellanox hardware, Cavium’s XPliant switch supports 10/25/50/100 Gbps interfaces in various combinations up to a non-blocking 3.2 Tbps of throughput (e.g. 32×100 or 128×25 Gbps).
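The throughput claims are simple arithmetic: a port combination is non-blocking when its aggregate bandwidth does not exceed the ASIC’s switching capacity. A quick check of the combinations quoted above (the 64×50 row is an additional combination these ASICs commonly advertise, included here for illustration):

```python
# Check which port combinations saturate a 3.2 Tbps (3200 Gbps) ASIC
# without oversubscribing it.
capacity_gbps = 3200
combos = {
    "32 x 100 Gbps": 32 * 100,
    "128 x 25 Gbps": 128 * 25,
    "64 x 50 Gbps": 64 * 50,   # illustrative additional combination
}
for name, total in combos.items():
    # Non-blocking means the aggregate port bandwidth fits within capacity.
    print(f"{name}: {total} Gbps, non-blocking: {total <= capacity_gbps}")
```

All three combinations sum to exactly 3200 Gbps, which is why the vendor can advertise them as fully non-blocking line-rate configurations.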
The other significant merchant switching news at OCP came from the project’s founder, Facebook, which gave an update on its second-generation switch, the Wedge 100. Facebook revealed that the switch uses Broadcom’s Tomahawk ASIC, which, like the XPliant, supports up to 32×100 Gbps and will enter mass production later this year. The Wedge runs Facebook’s FBOSS software, which, while open and programmable, doesn’t support SAI and is instead tied to Broadcom ASICs through its OpenNSL APIs. Currently, Facebook is focused on aggregating multiple Wedges into a spine switch and using BGP to build a redundant fabric linking Wedge devices. The Wedge is still targeted at Facebook’s own environment, and although the company didn’t reveal any network services running on the switch, the architecture is extensible and Facebook says it’s “committed to continuing FBOSS development in the open and working with the community to build a rich set of applications that can run on OCP networking hardware.”
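A common way to build such a redundant BGP fabric (a general leaf-spine design, not Facebook’s published configuration) is to give every switch its own private ASN and peer each leaf with every spine over eBGP, so that equal-cost multipath routing survives any single spine failure. The numbering sketch below is purely illustrative; the ASN range and naming are assumptions.

```python
# Hypothetical eBGP leaf-spine numbering sketch. Each switch gets a
# unique private ASN (64512+ is the private 16-bit ASN range), and every
# leaf peers with every spine, giving num_spines equal-cost paths
# between any pair of leaves.
def fabric_peerings(num_spines: int, num_leaves: int):
    spine_asns = {f"spine{i}": 64512 + i for i in range(num_spines)}
    leaf_asns = {f"leaf{i}": 64512 + num_spines + i for i in range(num_leaves)}
    # Full leaf-to-spine mesh: losing one spine still leaves
    # num_spines - 1 paths, so the fabric stays redundant.
    return [(leaf, la, spine, sa)
            for leaf, la in leaf_asns.items()
            for spine, sa in spine_asns.items()]

peers = fabric_peerings(num_spines=4, num_leaves=8)
print(len(peers))  # 32 eBGP sessions
```

Per-switch ASNs keep the design purely eBGP, which sidesteps iBGP’s full-mesh or route-reflector requirements and lets standard loop prevention (AS-path checks) do the work of keeping the fabric sane.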
OCP Merchant Silicon Takeaways
The common theme in these OCP announcements is that merchant silicon powering extensible white box switches, combined with open, Linux-based network stacks, is emerging as the substrate for network virtualization, management software, automation/orchestration interfaces (APIs) and higher-level application services. The announcements were understandably targeted at hyperscale cloud operators that need massive scalability and are building centrally controlled, scale-out networks with 25/50/100 Gbps leaf connections. While overkill for many enterprises, these technologies will percolate down to more appropriate scales and prices. The fusion of cheap hardware and open software could be the catalyst for a round of network upgrades and new SDN and NFV deployments.