SDxCentral: Congratulations again on your acquisition! How has the acquisition been from your point of view? What’s the overall progress of the Juniper/Contrail integration?
Ankur: “Thank you! Honestly, the integration process has been much smoother than I had initially expected. We have been executing without any disruption – the company leadership is very clear about the opportunity and the challenges, and has extended us the support we needed post-acquisition.”
SDxCentral: You recently announced beta availability for your SDN solution. Tell us how those trials are going.
Ankur: “We started beta tests with customers earlier this year. Now what we’re doing is making the JunosV Contrail Controller available for beta on a broader scale.
From a beta point of view, we were initially very cautious about the number of customers we recruited for our trials. Our focus was on doing a good job of supporting our customers and their use cases. Our customers include carriers, hosting providers trying to migrate to cloud, and financial services.”
SDxCentral: Are the use cases primarily multi-tenancy and network virtualization?
Ankur: “There are two broad use cases. The first is the enterprise case, where we are working with multi-tenant private clouds.
The second use case is NFV. We are still in the early days here. What people basically are trying to do is to move network services functions from physical devices to a cloud-based or virtualized model.”
SDxCentral: When you say NFV, you’re talking about taking traditional applications and moving them to the cloud. Is your SDN controller’s role to stitch together these functions?
Ankur: “Good question. There are five aspects to our controller’s function for the NFV use case. The first is instantiation of the network service; the second is scaling out the service on demand based on network growth; the third is load balancing across these service instances; and the fourth is monitoring of the services. Typically, many of these service instances do not have the same availability paradigm as physical devices, so we had to build a layer above to provide a similar availability paradigm. The fifth aspect is tying the MPLS circuits into these virtual machines running networking services.”
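The five controller responsibilities Ankur lists can be sketched as a minimal lifecycle model. All class and method names here are illustrative, not actual JunosV Contrail APIs:

```python
import math

class NfvController:
    """Toy sketch of the five NFV controller responsibilities
    described above (hypothetical names, not Contrail APIs)."""

    def __init__(self):
        self.instances = []      # running virtualized service instances
        self.next_instance = 0   # round-robin cursor for load balancing

    def instantiate(self, service):
        """1. Instantiate the network service as a virtual instance."""
        self.instances.append({"service": service, "healthy": True})

    def scale_out(self, service, demand, capacity_per_instance=100):
        """2. Scale the service out on demand as traffic grows."""
        needed = math.ceil(demand / capacity_per_instance)
        while len(self.instances) < needed:
            self.instantiate(service)

    def pick_instance(self):
        """3. Load-balance new flows across healthy instances."""
        healthy = [i for i in self.instances if i["healthy"]]
        inst = healthy[self.next_instance % len(healthy)]
        self.next_instance += 1
        return inst

    def monitor(self):
        """4. Monitor instances; recover failed ones to approximate the
        availability paradigm of a physical appliance."""
        for inst in self.instances:
            if not inst["healthy"]:
                inst["healthy"] = True  # placeholder for respawn logic

    def attach_mpls(self, instance, label):
        """5. Tie an MPLS circuit (label) into the service VM."""
        instance["mpls_label"] = label
```

For example, `scale_out("firewall", demand=250)` with the default per-instance capacity of 100 would bring up three instances, and `pick_instance()` would then spread flows across them.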
SDxCentral: In any of the betas today, are you doing service chaining with Juniper products?
Ankur: “Yes, in almost all cases, we’re doing service chaining with Juniper virtualized and physical services. We extensively use it for load balancing and firewall service insertion. Irrespective of the use case, the customer likes the automation part of the service chaining.”
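Conceptually, service chaining steers each tenant flow through an ordered list of service instances before delivery. A minimal sketch, with invented function names (the firewall and load balancer stand in for services like SRX or Firefly mentioned later in the interview):

```python
def apply_chain(packet, chain):
    """Pass a packet through an ordered service chain; each service
    returns a (possibly rewritten) packet, or None to drop it."""
    for service in chain:
        packet = service(packet)
        if packet is None:
            return None  # dropped, e.g. by the firewall
    return packet

# Illustrative services: a firewall that drops telnet (port 23),
# and a load balancer that rewrites the destination to a pool member.
def firewall(pkt):
    return None if pkt["dport"] == 23 else pkt

def load_balancer(pkt):
    pool = ["10.0.0.11", "10.0.0.12"]
    pkt["dst"] = pool[hash(pkt["src"]) % len(pool)]
    return pkt

chain = [firewall, load_balancer]
```

The automation Ankur highlights is that the controller inserts these services into the path declaratively, rather than an operator re-cabling or re-routing traffic per tenant.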
SDxCentral: That sounds exciting! Let’s go back to your vision and goals when founding Contrail. What specific problems did Contrail set out to solve, and how does being within Juniper help that cause?
Ankur: “Contrail’s initial mission was to seamlessly integrate enterprise WAN and private cloud with service provider IaaS, leveraging mission-critical L3VPN infrastructure. While it was easy for us to build a team of great engineers and get to an initial set of customers, we knew that scaling customer acquisition and technology/service partnerships would be a challenge. This is where Juniper has been able to help immensely.”
SDxCentral: What are your definitions of SDN and network virtualization? What are the primary customer use cases for them?
Ankur: “We view networking holistically and define it as S3DN: silicon-, systems-, and software-defined networking. That said, at this point in time, my view is that network virtualization is the driving use case for most customers because it integrates better with server orchestration systems, makes it easier to manage the data center, and simplifies their physical infrastructure.
We are also seeing early demand in the service provider segment for NFV. What we realized early on was that the technology choices we made around extending MPLS circuits all the way to hypervisor makes our solution a natural fit in telco cloud/NFV.”
SDxCentral: What’s unique in your approach to solving these use cases?
Ankur: “First, we architected the solution from the ground up to ease the data center operations problem. We designed the controller for uptime, scalability, and in-service upgradability. Two technology choices help us achieve this: (1) a highly distributed control plane and a DHT database like Cassandra – the state of the system is not maintained in a central entity; and (2) the controller operates on topology and policies instead of flows.
Second, we decided it would be a lot easier for customers to adopt network virtualization if they could interoperate with their existing gear without requiring a rip-and-replace or front-ending their high-end routers with slower software gateways. This is why we made the technology choices around BGP for control plane and MPLS or VXLAN for data plane. This allows us to natively interoperate with existing routing and services gear without any scalability challenges of software gateways.
Third, we decided to use L3VPN and E-VPN as underlying technology for network virtualization. This meant that we could seamlessly federate across enterprise WAN networks, enterprise private clouds, and service provider public clouds.”
SDxCentral: It seems like you’re leveraging known technologies such as MPLS and BGP to solve these problems. What are the pros and cons of doing so? Does OpenFlow feature in your SDN suite?
Ankur: “MPLS and BGP selection is orthogonal to the selection of OpenFlow, as all three can co-exist in the system. We also use a few other standards-based protocols in the system. The important thing to note is that we have designed the system for scale-out from the ground up, and there is no hard dependence on individual protocols – new ones can be added or existing ones replaced.
On the compute node, we run two processes – a vRouter (instead of a vSwitch) and an agent that works in tandem with the controller. The vRouter and the agent operate on a flow basis, and this can easily be supported using OpenFlow.
However, we decided that our controller-to-agent interaction will not be reactive (as in OpenFlow v1.3). This helps with both latency and scale. In order to support signaling of topology, policies, and configuration, we chose XMPP between the agent and controller, as it is a well-understood protocol.
BGP is really used in the system for network virtualization in the control plane. It’s the protocol we use to signal the L3VPN or EVPN topologies to other controllers or physical or virtual routing and services gear across data center(s). The protocol is widely deployed in networking gear, and there was no reason to pick something different or proprietary. For configuration of networking gear, we chose Netconf, as it is widely supported by vendors like Cisco, Juniper, etc.
Finally, MPLS is one of the primary protocols we support in the data plane, the others being VXLAN and NVGRE. We chose MPLS because it allows us to natively interconnect with WAN routers and service provider networks. It also enables much finer-grained traffic engineering across the WAN. Since we use MPLS over GRE/UDP, it obviates the need for a native MPLS fabric within the data center.”
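As a rough illustration of the MPLS-over-GRE tunneling described here, the encapsulation can be laid out as follows. This is a simplified sketch of the standard header formats (RFC 3032 label stack entry, minimal RFC 2784 GRE header), not Contrail's vRouter implementation, and the outer IP header is omitted:

```python
import struct

def mpls_label_entry(label, tc=0, bottom=True, ttl=64):
    """Pack one 32-bit MPLS label stack entry:
    20-bit label | 3-bit traffic class | 1-bit bottom-of-stack | 8-bit TTL."""
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

def gre_header(proto=0x8847):
    """Minimal 4-byte GRE header: zero flags/version, then the
    protocol type. 0x8847 is the EtherType for MPLS unicast."""
    return struct.pack("!HH", 0, proto)

def encapsulate(payload, vpn_label):
    """MPLS-over-GRE: [outer IP, omitted] | GRE | MPLS label | payload.
    The label identifies the tenant VPN at the far-end vRouter or PE,
    so the data center fabric only needs to forward plain IP/GRE."""
    return gre_header() + mpls_label_entry(vpn_label) + payload
```

This is the sense in which a native MPLS fabric is unnecessary inside the data center: the label stack rides inside an ordinary GRE tunnel between endpoints.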
SDxCentral: What’s your view on overlay approaches versus direct fabric programming?
Ankur: “There are two aspects to the selection of overlays: performance and simplicity. From a performance point of view, the server technology is mature enough to handle 10GE throughput with little overhead. We are working with 40GE NICs, and as Intel makes advances, it will become clearer if it continues to make sense to extend the network edge to the server.
From a simplicity point of view, overlays help with isolation and simplification of the physical switching fabric within the data center. Given the fact that server performance is adequate today, direct fabric programming provides no gains while increasing the complexity and reducing the scale of the physical fabric.”
SDxCentral: We’ve heard you talk about “SDN as a compiler.” Tell us more about that.
Ankur: “We think of SDN as a compiler instead of an operating system. Our view is that the user should not have to deal with the complexity of control-plane/data-plane separation, routing protocols, or flows. The end user is better served with a compiler-like approach where the outcome is described in a higher-level language, and the compiler does the transformations to a lower-level language understood by the physical and virtualized entities.
The user is expected to say, “Here are my networking requirements, my security requirements, and my performance requirements. Reserve the resources, elastically scale them, and provide analytics to prove that the requirements are met.” We have a formal language that we use for connectivity, policies, and service chaining, and it can be extended over time as new services and applications get designed.”
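The compiler analogy can be made concrete with a toy translation pass: a declarative, high-level intent goes in, and lower-level per-endpoint rules come out. The intent schema below is invented for illustration; Contrail's actual policy language differs:

```python
def compile_intent(intent):
    """Toy 'SDN as a compiler' pass: expand a high-level connectivity
    intent into directional allow-rules for each endpoint pair.
    The schema is hypothetical, for illustration only."""
    rules = []
    for policy in intent["policies"]:
        for src in intent["networks"][policy["from"]]:
            for dst in intent["networks"][policy["to"]]:
                rules.append({
                    "match": {"src": src, "dst": dst, "port": policy["port"]},
                    "action": "allow",
                    # service chain to traverse, e.g. a firewall
                    "chain": policy.get("via", []),
                })
    return rules

# One high-level policy ("web may reach db on 5432, via a firewall")
# compiles down to one low-level rule per endpoint pair.
intent = {
    "networks": {"web": ["10.0.1.1", "10.0.1.2"], "db": ["10.0.2.1"]},
    "policies": [{"from": "web", "to": "db", "port": 5432, "via": ["firewall"]}],
}
rules = compile_intent(intent)
```

The user states requirements in the higher-level form; the "compiler" handles the fan-out into entities the physical and virtual gear understands.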
SDxCentral: Juniper recently announced a new datacenter switch. It has OpenFlow support in the EX and MX, and then there’s QFabric. How should prospective customers evaluate which product is right for their data center?
Ankur: “Juniper has a varied customer base, so we need to provide customers with choice based on their use case.
Some customers request OpenFlow for varying reasons, and we support them through our physical hardware gear. For example, some are using OpenFlow to program physical switches and routers in a bandwidth calendaring application. Some customers develop their own controller software based on open source products like NOX, Floodlight, etc. Our newest switch, the EX9200, enables such customers with additional flexibility in both hardware and software – extensibility in switching ASICs as well as plug-ins through JUNOS.
QFabric is our architecture for a simplified, single-tier fabric in the data center. This has been well received by customers who want to run both virtualized and non-virtualized assets and gain the benefits of network virtualization in a legacy environment.
Now we have JunosV Contrail that provides best-of-breed network virtualization for both virtualized and non-virtualized environments. It allows our customers to take a multi-vendor switching environment and make it look like a unified fabric.”
SDxCentral: How does your solution integrate with existing networks in both the datacenter and outside? What’s the migration strategy for existing networks?
Ankur: “The interesting thing about JunosV Contrail is that we use a combination of overlays and standards-based management, control and data plane protocols. Overlay allows us to easily ride on top of multi-vendor switching fabric, and standards-based protocols allow us to integrate with existing multi-vendor routing and services gear. This eases the migration issue for existing networks.”
SDxCentral: Given Juniper’s position in network security, what’s the role of the Juniper security products such as firewalls, IDPs, VPNs, etc., in the Juniper SDN roadmap?
Ankur: “Security and analytics are very important components of the Juniper SDN solution. JunosV Contrail natively supports both L3VPN and EVPN, and we will also support an IPSec data plane in the near term. We use the concept of service chaining to seamlessly weave in security services like threat mitigation (Junos WebApp Secure), IPS, and firewalls, both physical (SRX) and virtual (Firefly). Service chaining also allows us to integrate third-party services like security and application delivery.”
SDxCentral: Finally, what are your thoughts around OpenDaylight?
Ankur: “In my opinion, OpenDaylight will go through its struggles over the next nine to 12 months and will eventually settle around the OpenStack model — an API framework against which all vendors develop networking services or extensions. At the same time, each vendor will be able to provide a commercial-grade implementation that will preserve the economic interests of those involved.
JunosV Contrail is a scale-out networking system, and the controller is just one of the many components in the system. OpenDaylight will foster common APIs where protocols may take longer to evolve or may not be the best fit. This is aligned to our multi-vendor strategy. As a result, I can definitely see OpenDaylight getting integrated into our commercial offering.
BTW, if your readers want to learn more about our SDN strategy, they should visit www.juniper.net/sdn.”
SDxCentral: Thank you for your time!