I recently attended an Optical/SDN conference in New York, and I also moderated an SDN user panel at another conference in Silicon Valley. In attending such conferences, I always look forward to learning how traditional Tier 1 service providers (e.g. Verizon) and content providers (e.g. Google) utilize or plan to utilize SDN to address major operational pain and cost points in their networks. For example, major content providers speaking at these conferences have already begun to use internally developed software-based load balancing and security applications within the SDN framework. One major content/hosting provider told me at one of these conferences that it no longer adds appliance-based load balancers to one of its network services, having developed its own virtual load balancer instead. The virtualization of some basic functionality typically found within security and application delivery controller appliances is now an initial use case within the SDN framework for content providers like Amazon, Microsoft (Azure), Facebook, and Google.
When it comes to service provider wide area connectivity, however, it was striking to hear that Google has achieved up to 90% utilization on its data center to data center WAN links through the use of an internally developed SDN controller, merchant-silicon-based 10GE switches and network optimization application software, while Tier 1 carriers like CenturyLink and Verizon continue to face low optical WAN utilization rates, which I estimate to be 25% or less on average. So are an SDN controller and network optimization software the solution to the Tier 1 carriers’ low optical WAN utilization woes? I do not think it is that simple. While SDN and software algorithms are “Mas Macho” in the technology, VC and stock market worlds these days, it is not likely they can solve the Tier 1 carriers’ low optical transport utilization problem alone. The solution is likely to take mastery in both software and optics.
Certainly, software-centric content companies like Google have significant resources that allow them to develop SDN-based network optimization software to achieve such high WAN utilization rates, but they also have advantages over traditional telecom operators in the type of traffic they carry across their WAN. This becomes a bit clearer when one looks deeper at Google’s backbone, which operates as two logical backbones, G-Scale and I-Scale. G-Scale is the logical backbone that uses a global-view SDN controller, merchant-silicon-based 10GE non-blocking switches and internally developed traffic engineering software to achieve up to 90% utilization rates on WAN links between Google data centers. The vast majority of the traffic on G-Scale is internal, however, allowing Google to more effectively achieve the high level of network utilization. I-Scale, on the other hand, is the Internet/customer-facing logical backbone that is based on traditional core routers and does not use SDN control or merchant silicon. Not surprisingly, I-Scale also suffers from the low utilization rates experienced by traditional Tier 1 carriers.
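The G-Scale advantage can be sketched in miniature: a controller with a global view admits latency-sensitive traffic first, then fills the remaining headroom with deferrable bulk transfers that can simply wait for the next scheduling round. This is a hypothetical illustration of the general technique, not Google’s actual implementation; the flow names and capacities are invented.

```python
LINK_CAPACITY_GBPS = 100.0

def schedule(flows, capacity=LINK_CAPACITY_GBPS):
    """Greedy admission on a single WAN link: interactive flows first,
    then deferrable bulk flows fill whatever headroom remains."""
    admitted, deferred = [], []
    used = 0.0
    # Sorting on the deferrable flag puts interactive traffic first.
    for flow in sorted(flows, key=lambda f: f["deferrable"]):
        if used + flow["gbps"] <= capacity:
            admitted.append(flow["name"])
            used += flow["gbps"]
        elif flow["deferrable"]:
            deferred.append(flow["name"])  # retried in a later round
        else:
            raise RuntimeError("interactive traffic exceeds link capacity")
    return admitted, deferred, used / capacity

flows = [
    {"name": "user-facing", "gbps": 30.0, "deferrable": False},
    {"name": "index-copy",  "gbps": 50.0, "deferrable": True},
    {"name": "backup",      "gbps": 40.0, "deferrable": True},
]
admitted, deferred, utilization = schedule(flows)
print(admitted, deferred, round(utilization, 2))
# ['user-facing', 'index-copy'] ['backup'] 0.8
```

Because the bulk traffic tolerates delay, the link can be run near capacity without risking the interactive traffic; a carrier carrying only customer traffic has no such deferrable filler.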
Tier 1 carriers that run national/regional fiber networks have built optical networks with typically low utilization rates, given that these networks were designed for peak traffic loads and up to 50% of the capacity was constructed as spare, idle capacity to address various failure scenarios (e.g. fiber cuts, human error when servicing equipment, etc.). Such design parameters are likely due to the DNA of traditional fiber optic planning among Tier 1 carriers and the need to compete for service against other Tier 1 carriers: a service outage is a recipe for losing customers. Tier 1 carriers in the US are also not permitted to use Deep Packet Inspection (DPI) techniques to prioritize traffic to help improve transport utilization (which could be especially useful during failure scenarios).
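The arithmetic behind my 25%-or-less estimate is straightforward. A minimal sketch, assuming protection idles half the capacity (per the 50% spare figure above) and that links are sized for a peak load of roughly twice the average — the 2x peak-to-average ratio is my illustrative assumption, not measured carrier data:

```python
def average_utilization(protection_share, peak_to_average):
    """Working capacity left after protection reservation, divided by
    the peak-to-average ratio the links were provisioned for."""
    working_share = 1.0 - protection_share  # e.g. 1+1 protection idles 50%
    return working_share / peak_to_average

# 50% spare for failure scenarios, links sized for ~2x average load:
print(average_utilization(protection_share=0.5, peak_to_average=2.0))  # 0.25
```

Either lever alone — sharing protection capacity more intelligently, or shaving the peak-provisioning margin — moves the number meaningfully, which is why the opportunity is attractive.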
With this as a backdrop, it seems there is an opportunity for equipment suppliers to traditional telecom operators to “marry” SDN software and optics, while also using other tools like GMPLS and network analytics, to dramatically improve optical network utilization across the WAN and within the optical transport network while maintaining a high level of service assurance. Even Google has stated at a recent industry conference that it is seeking solutions from equipment suppliers that can better optimize its traffic across its leased fiber network. Optical transport is a huge cost for network operators, and if the effective average optical transport utilization rate is 25% or less, that seems like a pain point that is ripe for a solution. We are seeing some companies attack this problem within the data center. The start-up Plexxi is “marrying” optics, SDN control and software-based algorithms to address scale and service agility and tackle intra data center pain points. If successful, Plexxi may end up being “Mas Macho” in the data center if it can truly “marry” SDN and optics. I would not be surprised if Cisco’s data center switching spin-in Insieme is also looking at some level of SDN control and optics integration for the data center as part of its product solution. We will likely formally hear about Insieme’s product this summer.
As for the WAN, however, start-ups may not be able to address the utilization challenge, as VCs are not typically enthusiastic about funding service provider equipment companies given long sales cycles with Tier 1 carriers and customer concentration issues. Throw in the word optics to the business case and that makes for strike three. Thus, traditional optics and/or router equipment companies may have an opportunity to differentiate themselves by solving the high-cost, low WAN optical transport utilization rate problem. While SDN is about separating the control plane and data plane and using software applications and network virtualization to achieve service creation and network agility, its initial focus was on Layer 2/3 switches with an “electrical” based fabric within the data center. Optimizing expensive WAN links for traffic flows that span both electrical fabrics within the data center and optical wavelengths across the WAN, while dealing with vendor-specific optical intricacies such as Forward Error Correction (FEC), amplifier settings, modulation techniques, etc., is not likely to be solved by SDN software control alone. To be “Mas Macho” in solving the optical WAN utilization challenge, the solution is likely to require multiple ingredients, including SDN software control, multi-vendor element management system support and visualization, network analytics, a strong optical pedigree and the use of industry protocols (e.g. GMPLS, OpenFlow, etc.).
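To see why the optical layer cannot simply be abstracted away, consider that the usable capacity of a wavelength depends on path length, modulation format and FEC overhead. The sketch below uses invented reach and rate figures purely for illustration; real values are vendor- and line-system-specific, which is exactly the multi-vendor intricacy argued above.

```python
# (modulation, max reach in km, line rate in Gbps) -- illustrative
# figures only, not drawn from any vendor's data sheet.
MODULATION_TABLE = [
    ("DP-16QAM", 800,  200),
    ("DP-QPSK",  2500, 100),
    ("DP-BPSK",  5000, 50),
]

def usable_capacity_gbps(path_km, fec_overhead=0.20):
    """Pick the densest modulation whose reach covers the path, then
    back out the FEC overhead to estimate payload capacity."""
    for modulation, reach_km, line_rate in MODULATION_TABLE:
        if path_km <= reach_km:
            return modulation, line_rate / (1.0 + fec_overhead)
    raise ValueError("path requires regeneration")

# A controller that treats every 'link' as identical would mis-plan:
# a 600 km path and a 2000 km path differ by 2x in this toy model.
mod, gbps = usable_capacity_gbps(600)
print(mod, round(gbps, 1))   # DP-16QAM 166.7
mod, gbps = usable_capacity_gbps(2000)
print(mod, round(gbps, 1))   # DP-QPSK 83.3
```

A WAN optimizer blind to these parameters would either strand capacity on short paths or oversubscribe long ones, which is why a strong optical pedigree belongs on the ingredient list.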
It is no surprise that Cisco has acquired both SDN software and optical sub-system companies (e.g. CoreOptics and Lightwire) over the past couple of years. While silicon photonics will play a critical role in achieving single-chassis, highly dense routers with 400G interfaces, is Cisco also looking beyond next generation 400G port routers to the broader issue of low WAN utilization rates? Alcatel-Lucent, another company with core competencies in routing and optics, also recently announced its Nuage Networks SDN Virtualized Services Platform solution. While Nuage offers some innovative WAN features in service provider MPLS VPNs, it does not address utilization issues in the optical transport domain. Will Alcatel-Lucent seek to leverage its initial Nuage SDN software solution with its traditional competency in routing and optics to address the optical transport utilization issue? Time will tell whether Alcatel-Lucent, Cisco and/or other vendor(s) will be “Mas Macho” in marrying software with optics to solve perhaps one of the most significant cost pain points today in service provider WAN transport.