To meet the requirements of the next-generation data center, data center networking is undergoing a major evolution, starting with the major cloud providers but with trickle-down impact on enterprises and service providers as well.
Historically, the data center network consisted purely of physical routers and switches, with an active-passive (or, in more advanced designs, active-active) pattern for traffic flow through the data center. Most traffic was north-south: external requests came into the data center to clusters of load-balanced application servers, which processed the data and sent the results back to the external requester.
Furthermore, in this type of architecture (sometimes termed a fat tree because of the high-bandwidth requirement at the core), often half of the network was active while the other half was on standby, ready to take over if a link or network device failed. The Spanning Tree Protocol (STP) was used to ensure there were no loops in the overall switching infrastructure. And historically, the access layer tended to be L2-centric, with L3 routing taking place only at the core.
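The core idea behind STP can be illustrated in a few lines of code. The sketch below is not the real STP state machine (no BPDUs, timers, or bridge-priority election); it simply shows the end result STP converges to: given a redundant switch topology, keep a loop-free spanning tree rooted at the root bridge and block the remaining links. The switch names and links are hypothetical.

```python
from collections import deque

# Hypothetical topology: two core switches, two access switches,
# with redundant links between the layers (5 links, 4 switches).
links = [
    ("core1", "core2"),
    ("core1", "access1"),
    ("core1", "access2"),
    ("core2", "access1"),
    ("core2", "access2"),
]

def spanning_tree(root, links):
    """Return (forwarding, blocked) link sets for a loop-free tree.

    BFS from the root bridge keeps one path to every switch; any
    redundant link not on the tree would be put in blocking state.
    """
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    visited = {root}
    forwarding = set()          # links kept in forwarding state
    queue = deque([root])
    while queue:
        sw = queue.popleft()
        for nbr in sorted(adj[sw]):
            if nbr not in visited:
                visited.add(nbr)
                forwarding.add(frozenset((sw, nbr)))
                queue.append(nbr)

    blocked = [l for l in links if frozenset(l) not in forwarding]
    return forwarding, blocked

forwarding, blocked = spanning_tree("core1", links)
print(len(forwarding), "forwarding links,", len(blocked), "blocked")
```

For N switches, the tree always keeps exactly N-1 links forwarding, which is why in this design the redundant half of the fabric sits idle until a failure triggers reconvergence.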
In the new cloud-centric world, and taking into account SDN and NFV evolution, the boundaries of the network are expanding, along with the expansion of data centers into new locations (such as remote points-of-presence like telco central offices). The next-generation data center network (NGDCN) now starts at the virtual switch within the servers themselves and extends out to the core switches and the data center interconnects (DCI) between geographically disparate data centers.
Key Components in Next Gen Data Center Networking