SAN JOSE, Calif. — Facebook announced new hardware for artificial intelligence (AI) and Microsoft open sourced Project Zipline for cloud-scale data storage and management today during the opening keynotes of the Open Compute Project (OCP) Global Summit.

OCP also announced four other new projects today: Inspur-led OpenRMC (short for "rack management controller"), Netronome-led Open Domain-Specific Accelerators, Facebook-led OCP Accelerator Module, and Nokia-led Open Edge.

Project Zipline

Project Zipline is essentially a more efficient way to manage and store data at scale. It's a new compression standard aimed at legacy data sets, like application and system logs, as well as newer data sets such as IoT, telemetry, and sensor data. The standard also takes the underlying software and hardware into account, and Microsoft says it achieves a higher compression ratio than commonly used models while preserving high throughput and low latency.

“It achieves all of these three vectors together — that is what is unique about this,” said Kushagra Vaid, general manager for server engineering in Microsoft’s Cloud and Enterprise division. “This is the first time to my knowledge that we are able to achieve compression without compromise.”
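Project Zipline itself is being released as hardware specifications and RTL rather than as a software library, but the three vectors Vaid describes can be illustrated with any compressor. The sketch below is a minimal, hypothetical Python example that uses zlib purely as a stand-in to show how compression ratio, throughput, and latency are measured on the kind of repetitive log data Zipline targets.

```python
# Illustrative sketch only: Project Zipline ships as hardware RTL, not a
# software library, so Python's zlib stands in here to show how the three
# vectors Vaid mentions (compression ratio, throughput, latency) are measured.
import time
import zlib

def measure(compress, data, runs=10):
    """Return (compression ratio, throughput in MB/s, latency in ms)."""
    start = time.perf_counter()
    for _ in range(runs):
        compressed = compress(data)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    throughput = len(data) * runs / elapsed / 1e6  # MB/s
    latency = elapsed / runs * 1e3                 # ms per buffer
    return ratio, throughput, latency

# Repetitive, log-like input: the kind of data set Zipline targets.
sample = b"2019-03-14T10:22:01 INFO request served in 12ms\n" * 4096

for level in (1, 6, 9):
    ratio, mbps, ms = measure(lambda d, lv=level: zlib.compress(d, lv), sample)
    print(f"zlib level {level}: ratio {ratio:.1f}x, {mbps:.0f} MB/s, {ms:.2f} ms")
```

With a software compressor, raising the compression level typically improves the ratio while hurting throughput and latency; Microsoft's claim is that a hardware implementation built from the Zipline reference design avoids that trade-off.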

In addition to contributing the compression algorithms to OCP, Microsoft also open sourced hardware design specifications and register transfer level (RTL) source code in Verilog. Initial content is available today, and Microsoft promises more soon. Releasing the RTL will allow OCP hardware vendors to use the reference design to produce chips that deliver higher compression at lower power.

Vaid said chipmakers including Intel, AMD, Ampere, Arm, Marvell, and SiFive are already developing CPUs that use Project Zipline. Microsoft is also working with networking companies Broadcom, Fungible, and Mellanox Technologies, along with storage vendors Eideticom, NGD Systems, and Pure Storage, on commercial projects that use the new compression algorithm.

Facebook’s AI Hardware

Also in an opening keynote, Facebook's Vijay Rao, director of technology strategy, announced new hardware and software focused on AI. This includes a hardware platform for AI training, code-named Zion; Kings Canyon, new custom chip designs (application-specific integrated circuits, or ASICs) optimized for AI inference; and Mount Shasta, for video transcoding.

The Zion platform decouples memory-, compute-, and network-intensive components, allowing each to scale independently, according to Facebook. It also uses Facebook's new vendor-agnostic OCP Accelerator Module (OAM), also contributed to OCP today, for memory bandwidth-intensive and compute-intensive workloads. Facebook says hardware partners including Intel, AMD, Habana, Graphcore, and Nvidia are developing products on top of the OCP common specification.

Facebook is also working with Esperanto, Habana, Intel, Marvell, and Qualcomm to develop ASICs for AI inference using the Kings Canyon specification.

Photo: Kushagra Vaid, general manager for server engineering in Microsoft’s Cloud and Enterprise division, gives a keynote at OCP Global Summit.