A data center fabric, generically, is the layout of how the computing, networking, and software components of a data center work together to deliver IT services. It is also the name of Facebook’s specific data center network design, which the company calls “data center fabric.”
The challenge in building a large data center fabric, whether Facebook’s or anyone else’s, is that it must scale as one logical, high-performance entity. This requires the ability to plug in modular elements and to “scale out” rapidly as new resources and networking elements are needed.
Different Architectures for Different Folks
A fabric, a term derived from switching technology, is the layout of how connections are made in a matrix of switches and servers. The name comes from the crisscross pattern of those connections, which resembles woven cloth. As data centers grow larger, this matrix of connections grows with them, and many different architectural approaches have emerged.
Modern data centers are far more complex than legacy corporate IT networks, and their network architectures are constantly evolving, depending on the size of the data center and the applications it runs. The basic nature of the fabric, however, is how all of the servers and switches are connected to one another and communicate through networking hardware and software.
One of the basic problems in any data center is the total number of connections, how the traffic flows, and the most efficient way to connect the different elements. This poses challenges at the hardware level as well as at the physical cabling and conceptual levels. It’s important to know how the data center applications will interact with one another, how much data they will exchange, and in which patterns.
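To see why the raw number of connections matters, consider the difference between meshing switches directly and linking them through a two-tier fabric. The sketch below is illustrative only; the switch counts are made up, not drawn from any particular data center.

```python
# Illustrative comparison: a full mesh, where every switch connects
# directly to every other switch, vs. a two-tier leaf-spine fabric.
# The switch counts here are hypothetical.

def full_mesh_links(switches: int) -> int:
    """Every pair of switches gets a direct link: n * (n - 1) / 2."""
    return switches * (switches - 1) // 2

def leaf_spine_links(leaves: int, spines: int) -> int:
    """Each leaf connects to every spine, and to nothing else."""
    return leaves * spines

if __name__ == "__main__":
    print(full_mesh_links(48))       # 1128 links to mesh 48 switches
    print(leaf_spine_links(48, 4))   # 192 links for 48 leaves, 4 spines
```

The hierarchy is what keeps the cabling and management problem tractable as the switch count climbs.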
For example, many of the connections may end up just managing other switches or shuttling traffic back and forth between servers. The promise of software-defined networking (SDN) technology is that it will be intelligent enough to figure out traffic patterns and determine how best to connect the network as those patterns change.
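A minimal sketch of that idea, assuming per-link utilization telemetry is already available: among candidate paths for a flow, pick the one whose busiest link is least loaded. The topology and utilization figures below are invented for illustration; a real SDN controller would gather this data and program the switches itself.

```python
# Sketch of traffic-aware path selection: given candidate paths and
# per-link utilization (fraction of capacity in use), place a flow on
# the path whose hottest link is least loaded. All data is invented.

from typing import Dict, List, Tuple

Link = Tuple[str, str]  # (switch_a, switch_b)

def least_loaded_path(paths: List[List[Link]],
                      utilization: Dict[Link, float]) -> List[Link]:
    """Choose the path that minimizes the worst-case (hottest) link."""
    return min(paths, key=lambda path: max(utilization[link] for link in path))

# Two candidate routes between the same pair of leaf switches.
paths = [
    [("leaf1", "spine1"), ("spine1", "leaf2")],
    [("leaf1", "spine2"), ("spine2", "leaf2")],
]
utilization = {
    ("leaf1", "spine1"): 0.9, ("spine1", "leaf2"): 0.4,
    ("leaf1", "spine2"): 0.2, ("spine2", "leaf2"): 0.3,
}
print(least_loaded_path(paths, utilization))  # routes via spine2
```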


Scaling to New Levels
Many terms are applied to the different pieces of a data center fabric, including Facebook’s. A top-of-rack (ToR) switch, or leaf, connects the servers in a rack and uplinks to a larger device, a spine switch, which in turn connects out to metro networks or “aggregation points.” This layout, known as “leaf-spine,” eliminates the intermediate aggregation layer of switches common in older enterprise networking designs.
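One practical question in any leaf-spine design is how much uplink capacity each leaf needs relative to its server-facing ports, usually expressed as an oversubscription ratio. The port counts and speeds in this back-of-the-envelope sketch are illustrative, not Facebook’s actual numbers.

```python
# Leaf-switch sizing sketch; port counts and speeds are hypothetical.

def oversubscription(server_ports: int, server_speed_gbps: float,
                     uplinks: int, uplink_speed_gbps: float) -> float:
    """Ratio of downstream (server-facing) to upstream (spine-facing)
    bandwidth on one leaf/ToR switch; 1.0 means non-blocking."""
    return (server_ports * server_speed_gbps) / (uplinks * uplink_speed_gbps)

# A leaf with 48 x 10G server ports and 4 x 40G spine uplinks is 3:1
# oversubscribed: bursts of traffic may contend for the uplinks.
print(oversubscription(48, 10, 4, 40))  # 3.0
```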
The common theme is that as data center traffic gets funneled into the network, it gradually works its way into larger and larger pieces of equipment toward a backbone of sorts – either the company’s own large global optical network or that of a service provider. Much traffic in a data center may never leave the data center at all. This is known as “east-west” traffic, which consists of applications and servers talking to one another inside the data center – or to adjacent data centers – rather than “north-south” traffic, which makes its way into an optical backbone to travel somewhere far away.
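The distinction boils down to whether both endpoints of a flow sit inside the fabric. A toy classifier, with a made-up address range standing in for “inside the data center”:

```python
# Toy east-west vs. north-south classifier. The internal prefix is
# hypothetical; a real deployment would use its own address plan.

from ipaddress import ip_address, ip_network

DATA_CENTER_PREFIX = ip_network("10.0.0.0/8")  # assumed internal range

def traffic_direction(src: str, dst: str) -> str:
    """East-west: both endpoints inside the fabric.
    North-south: one endpoint is outside, so the flow hits the backbone."""
    inside = [ip_address(addr) in DATA_CENTER_PREFIX for addr in (src, dst)]
    return "east-west" if all(inside) else "north-south"

print(traffic_direction("10.1.2.3", "10.4.5.6"))      # east-west
print(traffic_direction("10.1.2.3", "203.0.113.10"))  # north-south
```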
The largest data center fabrics, including Facebook’s, are becoming even more modular and scalable to accommodate the massive requirements of today’s data centers, especially “webscale” architectures. The new data center fabric must take into account how to handle terabits of data and ever-increasing connectivity needs.