We hope you are enjoying our interview series at SDxCentral. We’ve been having a blast, chatting with a wide variety of SDN community participants, from CEOs to security experts to open-source project leaders. This week, we have a different flavor of featured interview. There’s been buzz around how SDN can be applied to wide-area and carrier networks, but we’ve had few examples to date. The folks over at Cariden, experts in traffic management, turn out to have an interesting perspective on SDN, with some compelling real-world examples. We present here an interview with Jeff Bazar, VP of Strategic Products over at Cariden.
SDxCentral: For some of our readers unfamiliar with Cariden, can you provide a brief overview of your company?
Jeff: “Absolutely. Cariden is a self-funded company founded in 2001 and profitable since 2003. We have emerged as the de facto standard provider of IP/MPLS traffic management software. We provide planning, traffic engineering and operational intelligence software for carrier networks worldwide. We serve the majority of Tier 1 service providers and are proud of the market validation we have had with consistent growth, 50% CAGR since 2008, and extremely high renewal rates of over 95% in deployments that include many of the most challenging networks in the world.”
SDxCentral: You are well aware that SDN is hot and we’ve seen a large number of vendors jumping on the SDN bandwagon. I have to ask: does SDN fit into Cariden’s core strategy, or are you a bandwagon jumper?
Jeff: “Interestingly enough, software-defined networking is a good umbrella term for operational solutions that Cariden has been delivering for many years now—you could say that from our perspective, SDN is a revolution a decade in the making. I’ll cover some of that detail when we talk about customer use cases.
Nevertheless, in response to heightened market interest and readiness, we’ve developed an admissions module and integration with OpenFlow controllers for our recently announced NS-OS.
Our path has always been to work with our technology-leading customers to bring the benefits of network visibility and automation. We will continue that path with the NS-OS and will continue to publicly demonstrate software elements such as the admissions module and OpenFlow interface periodically as we work towards a general release of the NS-OS.
In our most recent public demonstration (Carrier Cloud Summit 2012), we highlighted our new distributed cluster technology for answering admission requests in real time at the rate of four hundred thousand queries per second on a single commodity server. We will be holding similar demonstrations for OpenFlow integration in the Fall.”
SDxCentral: What distinguishes Cariden from other SDN companies?
Jeff: “Most importantly, Cariden’s SDN bona fides are counted in deployments and renewals rather than trials and PoCs. We have had a head start on providing a software abstraction of network resources in production environments. All other companies who want to participate in this space must go through the vetting and maturation that Cariden has already passed.
Secondly, SDN will be adopted gradually and incumbents such as Cariden have an advantage in being a trusted partner for providing safe passage to the future.
Thirdly, customers appreciate fully working solutions. Cariden has a complete suite for Infrastructure SDN and has the partnerships in place to provide infrastructure awareness for virtualization and micro-timescale visibility.”
SDxCentral: Let’s delve more into these working solutions you mention and cover some details around what you term Infrastructure SDN. What key problems are addressed by Infrastructure SDN?
Jeff: “Infrastructure SDN is the umbrella term we are using for the software that manages a pool of network resources.
I’ll discuss four use cases that help describe Infrastructure SDN. You will notice a theme here. There are large benefits in centrally managing pools of resources and that management requires a coordinated combination of global visibility, predictive modeling, service intelligence, and device control.
1. Managing a pool of transoceanic network resources
Example: One of our customers exchanges over a quarter terabit per second at peak time between South America and the United States. They pay dearly for the connectivity between the two continents and would like to make full use of that capacity, but they are stymied because the connectivity is first split four ways between two pairs of cities, then amongst 12 routers, 20 interfaces, and almost 100 ports. The routers do not have a global view, and the operators are challenged with a difficult balancing act.
A centralized view/control model was the key for us in abstracting the various ports, interfaces, circuits and city pairs such that the two sides of the network basically saw a transparently managed 400Gbps pool of connectivity between the networks on each continent.

For this solution, Cariden’s discovery software provided a global view of the network elements, the modeling software provided a view of traffic under failure, and the path layout intelligence provided the load-balanced paths. Finally, we used the existing router controls to push the paths down to the network elements. The results were an increase in utilization from 70% to 95% and a corresponding savings of over $20M per year in just this one section of the network.
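To make the pooling idea concrete, here is a minimal sketch, in Python, of splitting an aggregate demand across a pool of parallel paths in proportion to capacity, then checking that any single-path failure still leaves enough headroom. The path names, capacities, and the 95% utilization target are illustrative assumptions, not Cariden’s actual algorithm.

```python
# Hypothetical sketch: balance an aggregate demand across a pool of
# parallel paths and verify the pool survives any single-path failure.
# All names and numbers are invented for illustration.

def balance(demand_gbps, capacities):
    """Split demand across paths in proportion to their capacity."""
    total = sum(capacities.values())
    return {path: demand_gbps * cap / total for path, cap in capacities.items()}

def survives_single_failure(demand_gbps, capacities, max_util=0.95):
    """True if the pool can still carry the demand with any one path down."""
    for failed in capacities:
        remaining = {p: c for p, c in capacities.items() if p != failed}
        alloc = balance(demand_gbps, remaining)
        if any(alloc[p] > max_util * remaining[p] for p in remaining):
            return False
    return True

pool = {"path-a": 100, "path-b": 100, "path-c": 120, "path-d": 80}  # Gbps
print(balance(250, pool))              # proportional shares of 250 Gbps
print(survives_single_failure(250, pool))
```

A production traffic-engineering model would of course also account for latency, per-class demands, and correlated failures across the underlying fiber.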
2. Managing a pool of fiber resources
Currently, the most failure-prone portion of the WAN is the fiber infrastructure. The router chassis and line cards have high availability but fiber cuts are inevitable. The challenge in managing fiber resources is being able to “reprogram the network” to respond to failures in a timely manner: seconds and minutes, as opposed to hours and days.
At one customer, for example, failure mitigation would regularly take three to four days. With software automation, we were able to use an up-to-the-minute global view, a traffic model, routing intelligence, and deployment automation to cut the mitigation time down to minutes by automatically steering traffic away from the links on the broken cable.
The results were that Mean Time to Mitigation (MTTM) decreased from four days to 10 minutes, and of course there was an associated reduction in customer complaints.
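The mitigation loop described above (detect a cut, remove the affected links from the model, recompute paths) can be sketched as a shortest-path recomputation that skips failed links. The topology, link metrics, and city names below are invented for illustration; the real system pushes the recomputed paths to routers rather than printing them.

```python
# Hypothetical sketch of the mitigation step: on a fiber-cut alarm,
# drop the affected links from the model and recompute the path.
import heapq

def shortest_path(adj, src, dst, failed=frozenset()):
    """Dijkstra over a weighted adjacency dict, skipping failed links."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, {}).items():
            if (u, v) in failed or (v, u) in failed:
                continue  # link is on the broken cable; steer around it
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

adj = {"nyc": {"chi": 1, "dc": 1}, "chi": {"nyc": 1, "sjc": 2},
       "dc": {"nyc": 1, "sjc": 3}, "sjc": {"chi": 2, "dc": 3}}
print(shortest_path(adj, "nyc", "sjc"))                          # normal path
print(shortest_path(adj, "nyc", "sjc", failed={("chi", "sjc")}))  # after a cut
```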
3. Managing data center interconnect resources
Here, the challenge is not just in controlling the geographic spread of network resources but in controlling the spread across time, and in providing an API to applications that need network resources. For example, applications such as data center backup and VMotion would use the API to get a slice of the network at the most economical time and place.
Google’s demonstration at ONS 2012 is a good example of this use case. Here again, the key is the combination of the global network view, a traffic model (especially across time), service intelligence to route according to latency and resiliency requirements, and device control, which was achieved with OpenFlow in this case.
The results reported by Google were that utilization increased from 40% to 100% for a much better optimized network and increased flexibility and control over the resources.
4. Managing resources within the data center
Although it is the least talked-about area, data centers also need active management of network resources. As pointed out by SDN analyst David Lenrow, virtualizing solutions like those offered by Nicira manage the network bindings but “abstract the physical network and specific topology out of existence.”
But just as storage devices are managed by storage virtualization—someone has to ensure the individual disks do not get overfilled and that the storage is resilient—network resources similarly have to be managed by virtualization software to ensure that switches and ports do not get overfilled and that the bandwidth is resilient.
Data center traffic differs in that microbursts add a further variable beyond the explicit management of large flows. Our customers maintain low average network utilization within the data center, even though this does not fully guarantee protection against microbursts.
We currently provide the visibility, model, intelligence and control of jumbo flows, and we will work with our hardware vendor partners to provide the microburst visibility as well.
Current results are that our customers obtain the ability to use the backup data center live for a 2x savings and enjoy the increased headroom while we work to provide more granular visibility.
In summary, the problem of managing pooled resources is paramount across the networking spectrum and must be handled by a solution encompassing four software areas. Here’s a graphical summary:
And the following table describes the tasks performed by each area.”
| Software Area | Task |
| --- | --- |
| Global Visibility | Answer to: What is the path between cr1.nyc and cr2.sjc? |
| Predictive Model | Answer to: Will there be enough resilient capacity to provide 500Mbps between cr1.nyc and cr2.sjc next Monday between 9-10AM? |
| Service Intelligence | Answer to: Which data centers would be able to handle a 500Mbps cloudburst service, and when? |
| Device Control | Set up the access to admit a particular demand and implement a route decided by the path computation element. |
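As a rough illustration of the admission-style queries described above, here is a hypothetical sketch of a time-indexed resource model that answers whether a bandwidth request fits in a future slot. The link name, capacity, and slot encoding are invented; a real admissions module would also consult the predictive model for resilience under failure.

```python
# Hypothetical sketch of an admission query against a time-indexed
# resource model: can the network commit `mbps` on a link in a
# future time slot? Names and numbers are illustrative.

class ResourceModel:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps      # link -> capacity in Mbps
        self.reserved = {}                 # (link, slot) -> committed Mbps

    def can_admit(self, link, slot, mbps):
        """Check whether the request fits alongside prior commitments."""
        return self.reserved.get((link, slot), 0) + mbps <= self.capacity[link]

    def admit(self, link, slot, mbps):
        """Commit the bandwidth if it fits; return whether it was admitted."""
        if not self.can_admit(link, slot, mbps):
            return False
        self.reserved[(link, slot)] = self.reserved.get((link, slot), 0) + mbps
        return True

model = ResourceModel({"cr1.nyc->cr2.sjc": 10_000})
slot = ("monday", 9)  # the 9-10AM hour
print(model.admit("cr1.nyc->cr2.sjc", slot, 500))        # admit 500 Mbps
print(model.can_admit("cr1.nyc->cr2.sjc", slot, 9_600))  # would exceed capacity
```

Answering such queries at hundreds of thousands of requests per second, as in the cluster demonstration mentioned earlier, is mainly a matter of sharding this lookup structure across servers.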
SDxCentral: Very impressive results indeed. Can you relate the functionality you just discussed to your recent NS-OS announcement? And could you also explain how network controllers, especially OpenFlow controllers, play a role here?
Jeff: “Definitely. A few weeks ago (August 8), Cariden announced the Network Service OS (NS-OS), which encompasses the four areas of functionality previously mentioned: global visibility, predictive modeling, service intelligence and device control.
OpenFlow controllers, and controllers in general, provide only the global visibility and device control. Network controllers are not expected to provide a central resource model across time or path computation functionality; that functionality is properly kept independent of the particular controller or controllers at hand. The NS-OS is a complete solution that includes interfaces to the controllers, the central resource model and a service-based (not just a network-based) API to applications. The NS-OS integrates with OpenFlow controllers to gain control and visibility of OpenFlow-enabled devices. The NS-OS also extends visibility to the whole network by supporting existing and upcoming visibility and control protocols such as PCEP, BGP-LS, IRS, SNMP, NETCONF, XML, vendor-specific CLI, and various EMS implementations. We believe this is important because our customers need us to support the network equipment deployed in their current production networks.”
SDxCentral: Speaking of PCEP, software control products, and Path Computation Elements (PCEs) in particular, have been around for a while with limited adoption. Do you expect SDN to avoid that same fate?
Jeff: “SDN will fare better than PCE did ten years ago. To be fair to PCEs, Cariden’s PCE has been adopted in many high-performance, high-control environments (see our most recent PCE announcement with LINX), but your point about the lack of wide adoption is valid. In general, we’ve noticed that industry intelligence astutely visualizes the future but has more difficulty predicting when that vision will be widely adopted. For example, packet-optical integration and the primacy of IP in the metro were heavily discussed in 1999 but are only coming to fruition now.
Similarly, software control of the network was foreseen years ago, and in fact there were a large number of highly financed companies in this space around 1999-2002 but the timing was not right. A variety of factors make the timing right now:
1. Operators have shifted their focus from the hardware to the software and operations.
2. Cost pressures for ISPs and cloud service providers have intensified, making cost reduction a matter of survival.
3. The presence of extremely cheap (yet capable) hardware has made the potential returns much more meaningful than in the past.
Another positive indicator is that this wave is customer-driven; this can be seen, for example, by the composition of the ONF.”
SDxCentral: To wrap up, do you think that your concept of Infrastructure SDN is a complement to the current network virtualization offerings, independent of them, or a competitor to them?
Jeff: “It is a complement. Cloud service providers will need to (1) differentiate their services by providing SLAs (for example, bandwidth, latency, and resiliency guarantees) and (2) more efficiently deploy their network resources. As you know, the price war has already begun between the largest providers so differentiation and efficiency are already important.
Today’s multitenant solutions such as OpenStack Cloud Management Framework with Quantum simply provide an overlay and must be augmented with visibility and control of the underlying network resources to meet these challenges.”
About Jeffrey Bazar, VP, Strategic Products
Jeffrey is leading the expansion of Cariden’s business portfolio into adjacent markets and supporting the sales organization. Prior to joining Cariden, Jeffrey worked in the Technology Group at American Capital, a private equity firm, pursuing venture investments. Jeffrey also has extensive networking industry experience gained during roles co-founding iPath Technologies and building Verizon’s first national MPLS/IP backbone. Jeffrey has an MBA from the Sloan School of Management at MIT and a BS from the University of Rhode Island.