Kireeti Kompella is back at Juniper Networks. Formerly CTO and chief architect of JunOS at Juniper, Kompella left last fall to join software-defined networking (SDN) startup Contrail as CTO, only for Juniper to acquire Contrail within months of his arrival. Today Kompella is chief architect and CTO of the Platform Systems Division at Juniper, where he spent 16 years of his career, and is active in the Internet Engineering Task Force. He has deep experience in packet transport, large-scale MPLS, VPNs, VPLS, and Layer 1-3 networking. Prior to Juniper, Kireeti worked on file systems at NetApp, SGI, and ACSC (acquired by Veritas).
Kompella: Network virtualization is the networking world’s response to compute virtualization. The technology underlying compute virtualization is the ability to emulate one CPU on another and to run one OS on top of another. This is nice, but not earth-shattering – until you throw in orchestration. Orchestration changes everything. You can deploy a new app that requires 50 servers in minutes (as VMs) rather than months. Now it’s time to bring this agility and efficiency to networking.
The SDN controller’s role is to give network virtualization the same level of agility, automation and elasticity. Start with a good underlying technology for partitioning networks (VPNs, closed user groups — let’s call such a partition a virtual network, or VN). Then, support analytics-based VM mobility, define the networking (including networking services) requirements of a VM, and make the VN responsive to network conditions using real-time analytics information. There is so much that can be done once we’ve made the leap from manual provisioning of physical network devices to orchestrating the network.
SDxCentral: Speaking of the SDN controller, I’ve heard you speak before about SDN as a compiler. Can you explain a little bit more about this? What’s the assembly language equivalent in this framework?
Kompella: Two things are needed to make network virtualization accessible to application developers and IT managers. First, we have to get away from configuring devices, using a CLI, and nailing down physical devices and ports. This static approach kills agility. Second, we have to get away from low-level details: the vagaries of a particular box or OS, and networking terminology itself. This alienates the DevOps folks and scares away app writers.
We need to allow folks whose primary concerns are compute, storage, and applications to specify the connectivity, security and other networking requirements. This leads to the notion of a high-level, abstract, declarative specification of the connectivity needs of apps and VMs. The SDN system transforms this into the assembly language (the device-level configuration) that satisfies this specification — hence the paradigm of SDN as a compiler.
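The "SDN as a compiler" idea can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the spec fields (`virtual_networks`, `placed_on`, `connect_to`) and the generated config syntax are invented for the example and do not reflect Contrail's actual data model or any real device CLI.

```python
# Hypothetical sketch: "compiling" a high-level connectivity spec into
# device-level configuration stanzas. All field names and the config
# syntax are illustrative assumptions, not a real product's model.

def compile_spec(spec):
    """Translate an abstract VN spec into per-device config stanzas."""
    configs = {}
    for vn in spec["virtual_networks"]:
        for device in vn["placed_on"]:
            stanza = [f"vn {vn['name']} vlan {vn['vlan']}"]
            for peer in vn.get("connect_to", []):
                stanza.append(f"permit {vn['name']} -> {peer}")
            configs.setdefault(device, []).extend(stanza)
    return configs

spec = {
    "virtual_networks": [
        {"name": "web", "vlan": 100, "placed_on": ["leaf1"], "connect_to": ["db"]},
        {"name": "db",  "vlan": 200, "placed_on": ["leaf1", "leaf2"]},
    ]
}

device_configs = compile_spec(spec)
# leaf1 carries stanzas for both VNs; leaf2 only for "db".
```

The app writer touches only the abstract `spec`; the "compiler" alone knows which boxes and ports are involved, which is exactly the separation of concerns described above.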
This approach is a far cry from how networks are provisioned today, but this is an absolutely necessary change, both in data centers and in other parts of the network. This approach is gaining a lot of traction from service providers as well as data center operators.
SDxCentral: Within the framework, how can we be sure that the high-level language you’ve defined is sufficiently complete in terms of all the problems we need to solve? What are the core elements in your model of the network?
Kompella: Excellent point. We know that the high-level language is incomplete: we have so much to learn, and so many domains to tackle with this technique. We believe this approach is vital for other parts of the network – access, edge, and core. Each of these will need its own extension, meaning a specification of some sort: “VN1 can only connect to VN2 via a firewall.” Today, these specifications can be done as data models, which are extensible in a clean, backward-compatible way and allow for automatic generation of many types of APIs. I don’t want to get into a religious debate of “data models” vs. “languages,” but there are many advantages to data models.
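The policy quoted above ("VN1 can only connect to VN2 via a firewall") and the backward-compatible extensibility of data models can be sketched as follows. This is a hypothetical illustration: the class and field names are assumptions for the example, not a standardized model.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data model for an inter-VN connectivity policy.
# Field names are illustrative, not an actual standardized schema.

@dataclass
class ConnectivityPolicy:
    src_vn: str
    dst_vn: str
    # Ordered list of services traffic must traverse (may be empty).
    via_services: List[str] = field(default_factory=list)

# "VN1 can only connect to VN2 via a firewall":
policy = ConnectivityPolicy(src_vn="VN1", dst_vn="VN2",
                            via_services=["firewall"])

# Backward-compatible extension: a derived model adds a new field
# (with a default) without breaking consumers of the base model.
@dataclass
class EdgePolicy(ConnectivityPolicy):
    bandwidth_mbps: int = 0

edge = EdgePolicy("VN1", "VN2", ["firewall"], bandwidth_mbps=100)
```

Because the extension only adds defaulted fields, older tooling that understands only `ConnectivityPolicy` still works, which is the clean, backward-compatible extensibility the data-model approach promises.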
We defined, as a data model, a core set of connectivity features for virtual networks in the data center, as well as certain NFV use cases in the service provider context. This can be easily extended as we learn more and as people give us new requirements. We are looking at data models for expressing the needs of access networks, edge networks, and core networks in the WAN. Extensions can be standardized or remain proprietary, which is an appealing aspect of this approach.
SDxCentral: Let’s move on to another area in SDN that’s been talked about but hasn’t seen many concrete examples to date: analytics. What’s your view on the value of analytics in SDN?
Kompella: Analytics is huge. There are three very different aspects of analytics. The first is for diagnostics and visualization. A top-down, high-level approach is great when you’re laying out connectivity between the defined set of VMs and virtual networks. But when things aren’t working, you want to go bottom-up: Look at the data, zero in on anomalies, and work backwards to where the specification went wrong. This means having a complete set of data going back days, weeks or even months and years.
This is not just about traffic statistics but application and user level information. This means constantly gathering control plane information; knowing what events occurred when; what applications were spawned; what resources were utilized; when (and how many) flows started and ended and from where to where; as well as packet and byte counts between VMs, between VNs, and within VNs. Analytics is the debugger that goes with the compiler paradigm.
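The bottom-up "debugger" workflow described here can be sketched as a scan over historical flow records for anomalies. The record format and the median-based test are assumptions made for the example; a real analytics store would hold time-keyed records and far richer statistics.

```python
# Hypothetical flow records: (src_vm, dst_vm, byte_count). In a real
# system these would come from the analytics store, keyed by time.
flows = [
    ("vm1", "vm2", 1_200), ("vm1", "vm2", 1_150), ("vm1", "vm2", 1_300),
    ("vm1", "vm2", 1_250), ("vm3", "vm2", 98_000),  # unexpected talker
]

def anomalous(flows, factor=10):
    """Flag flows whose byte count exceeds `factor` times the median.
    The median is robust to the outliers we are trying to find."""
    counts = sorted(b for _, _, b in flows)
    med = counts[len(counts) // 2]
    return [f for f in flows if f[2] > factor * med]

suspects = anomalous(flows)
# The vm3 -> vm2 flow stands out; from here one works backwards to
# ask why the specification allowed that traffic at all.
```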
The second aspect of analytics is capacity planning, where one looks at the historical data for trends and makes predictions backed by data. Do I need more CPU or memory resources? More firewalls or load balancers? Along with analytics, a “what if” scenario planner is vital.
Finally, analytics can feed back into the SDN system via machine learning. In today’s dynamic world, decisions have to be revisited constantly. To make (and remake) an informed decision, you have to have data. The idea of analytics being pumped back into an SDN system so it can add and/or migrate VMs or resize service instances through an orchestrator, or update ACLs in real time, is both exciting and scary. But the only way we’ll learn to use resources at peak efficiency is by constantly monitoring the resources and making corrections in how we deploy them.
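The closed loop described above, analytics feeding resource decisions back into the orchestrator, can be sketched as a simple threshold controller. The thresholds and function names are illustrative assumptions; a production system would use far more sophisticated (and cautious) policies.

```python
# Hypothetical closed-loop sketch: CPU utilization samples from the
# analytics pipeline drive a scale-out/scale-in decision for a pool
# of service instances. Thresholds are illustrative assumptions.

def rescale(current_instances, cpu_samples, high=0.80, low=0.20):
    """Return the new instance count given recent CPU utilization."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return current_instances + 1      # sustained load: scale out
    if avg < low and current_instances > 1:
        return current_instances - 1      # idle capacity: scale in
    return current_instances              # steady state

new_count = rescale(3, [0.91, 0.88, 0.95])  # sustained load -> 4
```

The "scary" part of the quote lives in this loop: each decision is only as good as the data and thresholds behind it, which is why the monitoring has to be continuous.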
SDxCentral: Now that Juniper has deployed Contrail in the marketplace, can you share a little about how those deployments are going? What are you seeing and learning from them?
Kompella: In our beta cycle, we have received input from global carriers, emerging online/cloud providers, and large enterprise customers. Overall, they have been very pleased with the VN/policy-based workflow and the dashboard/analytics information that is being presented in the UI. They have been able to deploy and scale their tests more quickly because Contrail is a pure software solution that interoperates seamlessly with existing WAN gateways and data-center switching fabrics. Today, we have over 40 customers using Contrail in beta, including SunGard, Jaguar Networks, ISPrime, and Cyberport.
SDxCentral: In these deployments, are these primarily in enterprise data centers or cloud service providers? How big is the role that service chaining plays, and are you seeing any WAN integration requirements pop up?
Kompella: It has been an even mix across enterprise and cloud SPs, though the use cases may be different. As security integration has been a major gap for many private/hybrid cloud environments, we are seeing good traction in our virtual private cloud solution.
Service chaining has been a critical component for both ITaaS and network functions virtualization (NFV) scenarios, as customers want the flexibility to insert both physical and virtualized IP-based services without major changes to policies and while still meeting compliance targets.
WAN integration has been a major component but is simplified with our standards-based IP VPN-based approach. Our customers see the solution as a natural extension to their existing environments and are now seriously defining an evolutionary path to enable cloud network automation across virtualized data centers and WAN.
SDxCentral: NFV is getting hotter in the marketplace. Where do you see SDN and NFV intersect? Do you see NFV impacting Juniper’s routing business or security business?
Kompella: NFV proponents are quick to point out that NFV is different from SDN. They are completely right. But virtualized network functions mesh beautifully with network virtualization, orchestration, and automation. NFV provides the base technology that allows us to escape the physics of the devices that perform network functions. SDN offers a high-level specification of NFV service chaining and provides the analytics for debugging, capacity planning, and real-time resource optimization.
Juniper has a platform for virtualizing network functions to run on industry-standard high-volume servers instead of purpose-built appliances. The next step is orchestrating this platform using the Contrail framework to create NFV service chains, elastically scale these up and down as needed, modify them as required, and tear them down when no longer needed. We believe NFV will positively affect all aspects of our business — routing, switching, and security.
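The lifecycle described here, create a chain, elastically modify it, tear it down, can be sketched as operations on an ordered list of virtual functions. The function names and chain representation are invented for the example and are not Contrail's actual API.

```python
# Hypothetical sketch of NFV service-chain lifecycle operations.
# Names and the chain representation are illustrative assumptions.

chains = {}

def create_chain(name, src_vn, dst_vn, services):
    """Stand up an ordered chain of virtual functions between two VNs."""
    chains[name] = {"src": src_vn, "dst": dst_vn, "services": list(services)}

def insert_service(name, service, position):
    """Modify a live chain, e.g. add an IDS, without re-provisioning."""
    chains[name]["services"].insert(position, service)

def tear_down(name):
    """Remove the chain when it is no longer needed."""
    del chains[name]

create_chain("web-to-db", "VN-web", "VN-db", ["firewall", "load-balancer"])
insert_service("web-to-db", "ids", 1)
# The chain is now firewall -> ids -> load-balancer.
```

Because the chain is just ordered data rather than physical cabling, modifying or removing it is a data-model operation, which is the agility argument made throughout the interview.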
SDxCentral: Kireeti, thank you very much for talking with us today.
Kompella: Thank you for having me.