When people talk about the drivers for SDN (and network virtualization for that matter), they tend to discuss the trend in comparison to compute and storage. The narrative usually includes some coverage of the dynamic nature of workloads in the datacenter, highlighting the need for orchestration of resources in a highly dynamic environment. It is this dynamism that introduces changes in the network, and change is the primary driver behind operational expense. So it goes that SDN will allow for automated change, thus driving down costs and allowing for more interesting things to be done.
Amidst all this workload mobility and the promise of an evolving infrastructure to handle it is an interesting phenomenon. SDN, overlays, and network virtualization make possible a virtual environment that can morph to meet the needs of the applications running on the network. But these are built on a network that is still fundamentally static in nature. Even with all of this change, the network (and more specifically, the transport capacity that it provides) is still limited by the physical cabling that interconnects devices.
In essence, the capacity requirements of the network are neither uniform nor static. As users and applications access network resources, they drive new workloads that will vary in what they need. Some flows might be particularly long-lasting and bandwidth-heavy (as with major backup utilities or disaster recovery applications), while others will tend to be short and bursty. The question is really how to satisfy these needs in a modern datacenter.
There are basically two schools of thought here: everything in aggregate vs. a tailored experience. The aggregate camp believes that, when measured over a significant enough sample, traffic is uniform and random. In this model, it is impractical or impossible to predict how flows will traverse the network, so the best that anyone can do is provide enough aggregate capacity, distributed equally over the network, to allow for any-to-any transmission. Given the static nature of available capacity, this is a sensible model; after all, even if you knew where you needed additional bandwidth, aside from recabling the datacenter, there is little you could do about it anyway.
If the first school of thought is bandwidth everywhere, the second is more about how to manage the physical fabric. Optics have been a part of network transport for a long time, but their cost has relegated them almost exclusively to the carrier space. When most datacenter traffic originated across 1GE server links, the capacity requirements simply did not justify the expense of optics. But the transition to 10GE, along with the heavy virtualization of servers, has increased the need for bandwidth in the datacenter, and the emergence of new technologies in the optics arena has changed the price points.
Entire books have been written on how optical transport works (and I won’t try to re-create that here), but there are basically three things you need to know:
Optics is exactly what it sounds like: light. This means that data transmissions propagate at the speed of light in the fiber, offering the lowest possible latency between directly connected nodes.
Photonic switching does not require intermediate electronic involvement. In other words, if light is traveling from node A to node C, all B has to do is pass that stream along. Since it is just light, this can be done with mirrors, which means that the presence of B is completely transparent in the transmission. In other words, the path from A to C is a single-hop path.
The transmission across fiber-optic cables involves multiple optical wavelengths (frequently called lambdas or colors). When a node is optically connected to other nodes, it can be programmed to receive specific lambdas. Light that does not match is photonically switched (no electronic conversion, so no hop); light that matches is terminated on the node and converted for electronic signaling.
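The add/drop behavior described above can be captured in a toy model. This is purely illustrative (the class and names here are invented for the sketch; real optical nodes are ROADMs with far richer behavior): a node terminates the lambdas it is programmed to receive and photonically passes everything else through, so intermediate nodes contribute no electronic hop.

```python
# Toy model of per-lambda add/drop at an optical node (illustrative only).
# A node terminates the wavelengths it is programmed to receive; all other
# light is photonically switched through, adding no electronic hop.

class OpticalNode:
    def __init__(self, name, drop_lambdas):
        self.name = name
        self.drop_lambdas = set(drop_lambdas)  # wavelengths this node terminates

    def handle(self, lam):
        """Return 'drop' if this wavelength is terminated here (converted to
        electronic signaling), else 'pass' (photonically switched, no hop)."""
        return "drop" if lam in self.drop_lambdas else "pass"

# Light on lambda 1550 travels A -> B -> C; only C is programmed to receive it.
b = OpticalNode("B", drop_lambdas=[1310])
c = OpticalNode("C", drop_lambdas=[1550])

print(b.handle(1550))  # pass  (B is transparent to this wavelength)
print(c.handle(1550))  # drop  (C terminates it)
```

Because B simply passes the wavelength along, the path from A to C behaves as a single-hop path, exactly as described above.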
The speed of light combined with bridging paths across multiple nodes makes optics particularly well suited to transmit large amounts of latency-sensitive network traffic. Where performance is critical, optics become an intriguing alternative to more traditional, statically cabled datacenter architectures. And the fact that these paths can be programmed means that optical capacity can be dynamically allocated as and when it is needed.
The challenge is how to integrate these photonic switching capabilities with existing packet-switched datacenter networks. It is unnecessary to have all traffic traverse an optical network just for the sake of using optics, and it is impractical to require a complete rip and replace to take advantage of optics. Real-world deployment scenarios outside of greenfield opportunities will require a hybrid packet-optical architecture.
Even with a hybrid architecture, the question is how traffic gets selected for transmission on optical paths. Enter SDN.
SDN’s control and forwarding plane separation creates central control points that are particularly well suited to view the entire network as a system and make intelligent pathing decisions. If the SDN controller can detect bandwidth-heavy or latency-sensitive flows, it can dynamically offload those to an optical bypass, guaranteeing capacity for that flow while sheltering the rest of the network from the congestive impacts of the traffic.
In this type of scenario, the challenge is both in identifying the flows for offload and then steering them through the bypass. This requires integrating a packet-switched network, an optical fabric, and a monitoring or detection application that can exchange information with the controller. The individual technology elements all exist, but they must be used in concert to get the full benefit.
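The detect-and-steer loop can be sketched at a high level. All names and thresholds below are hypothetical, not any particular controller's API: a controller polls per-flow byte counters, flags flows above a threshold as elephants, and steers those onto the optical bypass while everything else stays on the packet fabric.

```python
# Sketch of elephant-flow offload (hypothetical names and thresholds).
# The controller flags flows whose byte counters exceed a threshold and
# installs an optical bypass path for them; other flows keep the default
# packet-switched path.

ELEPHANT_BYTES = 10 * 1024**3  # offload flows that have moved more than 10 GB

def select_offloads(flow_stats):
    """flow_stats: {flow_id: bytes_seen}. Return the flows to offload."""
    return {fid for fid, nbytes in flow_stats.items() if nbytes >= ELEPHANT_BYTES}

def steer(flow_stats, install_bypass, install_packet_path):
    """Apply the offload decision via caller-supplied path installers."""
    offloads = select_offloads(flow_stats)
    for fid in flow_stats:
        if fid in offloads:
            install_bypass(fid)        # program an optical path for this flow
        else:
            install_packet_path(fid)   # default packet-switched forwarding

# Example: a backup flow has moved 50 GB; a web flow only 2 MB.
stats = {"backup-flow": 50 * 1024**3, "web-flow": 2 * 1024**2}
print(sorted(select_offloads(stats)))  # ['backup-flow']
```

In practice the hard parts are the inputs and outputs of this loop: the monitoring application that feeds accurate flow statistics to the controller, and the southbound integration that can actually program both the packet fabric and the optical fabric.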