SDxCentral’s Scott Raynovich recently sat down with Cisco’s Balaji Sivasubramanian and Vipin Jain to discuss the benefits of containerization, top concerns, areas of focus, the challenges of container networking, and infrastructure policy for microservices deployments. Get the latest on Cisco’s Project Contiv, containers, and microservices in the full interview and interviewee bios below.
Balaji Sivasubramanian is head of product management and strategy for cloud native solutions at Cisco. In this role, he focuses on solutions addressing emerging use cases around microservices and containers. Prior to that, Balaji spent 11 years at Cisco leading the global P&L for a $1B+ product and managing a global team. Balaji is also a co-author of two popular books in the IT industry. He holds bachelor’s and master’s degrees in computer engineering and an MBA from the Wharton School.
Vipin Jain leads Project Contiv, an open source initiative sponsored by Cisco to address networking challenges for containerized applications in the DevOps world. Prior to this, he worked on ASIC design and software implementation for the highly successful Nexus, ACI, and UCS products at Cisco. He has also held lead technical and management roles at multiple successful startups.
SDxCentral: Our research indicates a lot of interest in Linux container technology and other microservices, but it still seems early. Are your customers talking about it?
Balaji: Container adoption in production is definitely in the early stages. Application developers are trying to adopt it, but the overall tooling and IT processes around containers are in a nascent stage. Early adopters are looking at containers as a way to reduce the hypervisor tax and also get increased density from servers. Our customers are actively looking at containers and microservices-based application architectures. We are seeing adoption across the spectrum, from small and large enterprises to service providers.
Vipin: The benefits of containerization are real – improved resource utilization, predictable application deployment, and a very agile environment for delivering applications at the speed businesses need in order to be competitive. These benefits are measurable, so it is no surprise that customers across the board are evaluating containerization, with many looking to go into production this year.
SDxCentral: Containers and microservices seem better suited to some applications than others. What are the top use cases for containers?
Balaji: Containers are best for applications that have a specific purpose and yet are forced to use a virtual machine (VM). Monolithic applications usually bundle the functions of multiple microservices and can’t easily scale a single microservice if that service becomes a bottleneck; doing so may require an entire VM to scale up or scale out, which is typically slow and expensive. With a microservices architecture, however, it is possible to rapidly scale out specific services much more efficiently.
A primary use case for containers is increasing development agility through continuous integration and continuous delivery. Containers lend themselves well to this automated way of building, testing, and running applications in production. There are also customers looking to replace services offered via virtual machines with containers to reduce the footprint of those services. But one of the biggest use cases customers are considering is re-architecting applications around microservices to improve development agility and build applications that scale.
Vipin: While not all containerized applications are microservices, a good number of Linux and Windows applications will be containerized. As of today, the top use case is predictable application delivery for traditional and cloud-native apps, often referred to as the container-as-a-service use case. There are also derivative use cases, including continuous integration/continuous delivery, scale-out microservices, improved test environments for applications, changing application build processes, resource pooling, and consistent on-premises and cloud deployment of applications for hybrid workloads.
SDxCentral: Do you think that containers compete with or are a replacement for other technology, such as virtual machines?
Balaji: Today, some customers are adopting containers as a replacement for VMs for certain applications. VMs have a heavy footprint, whereas containers are lightweight in CPU and memory because multiple containers share the same operating system. In one instance, a customer believes they can run CPUs at up to 80% utilization with high-density container scheduling – a substantial improvement over the 20% utilization in their VM-based deployment. Containers can provide roughly 10x greater density than a VM-based deployment. Of course, containers introduce additional complexity in the sense that an application is split across multiple servers or data centers. A few years from now, when operations and tooling around containers become more robust, we expect containers to be the major form factor in data center and cloud build-outs.
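The density arithmetic behind these figures is easy to sketch. Only the 20% (VM) and 80% (container) utilization targets come from the interview; the host size and per-unit CPU shares below are hypothetical, purely to illustrate how lighter units plus a higher utilization ceiling multiply into a large density gain:

```python
def schedulable_units(host_cpus: int, unit_cpus: float, target_util: float) -> int:
    """How many workload units fit on one host at a given CPU utilization target."""
    return int(host_cpus * target_util / unit_cpus)

HOST_CPUS = 32  # hypothetical server

# VMs: heavier units, scheduled conservatively (~20% utilization, per the interview)
vms = schedulable_units(HOST_CPUS, unit_cpus=1.0, target_util=0.20)   # 6 VMs

# Containers: lighter units, packed densely (~80% utilization, per the interview)
containers = schedulable_units(HOST_CPUS, unit_cpus=0.5, target_util=0.80)  # 51 containers

print(vms, containers, containers / vms)
```

With these assumed unit sizes the gain works out to roughly 8–9x; different assumptions about VM and container footprints move the multiplier, which is why density claims are usually quoted as a range rather than a fixed constant.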
Vipin: Containers and VMs virtualize different things and operate at different levels – so in that sense they don’t compete, and you can deploy containers within VMs or without them. The big change is the movement of the control point, which now resides within the OS, rightfully closer to the application. In a container world, VMs no longer provide the primary method of acquiring dynamic resources (compute, storage, and network) for applications. Thus, they need not be central to all software infrastructure layers – that’s the real disruption.
However, it’s important to keep in mind that orchestration tools need integration or consolidation of some sort, and containers provide this. As an example, some container schedulers can take a VM, packaged inside a container, as a schedulable unit.
SDxCentral: What are the key challenges to implementing container technology? Is security one of them?
Balaji: Tooling around container technology is still limited and in its early stages. Deploying containers at large scale, from the developer’s and operator’s standpoint, is not as commercially mature as VM-based infrastructure.
Operators lack the ability to enforce the operational policies required for containerized applications to be deployed in production. Today’s container stack lacks communication with the underlying infrastructure, and vice versa, making it harder to monitor and ensure application performance.
Networking is fairly limited; only as recently as six months ago did Docker networking address multi-host networks. Hence, advanced security, multi-tenancy, isolation, monitoring, and so on are either too early or simply not there. Without such robust feature sets, applications are not scalable.
Customers are creating custom container stacks by combining different open source projects led by different vendors, and commercial support for a production-grade, end-to-end stack is still missing.
Vipin: Predictable resource acquisition, user/tenant based quotas, monitoring, security, migrating applications are the top challenges. And yes, security is certainly one of them with network security, storage isolation, image validity, compute resource isolation, authenticated access controls, etc. all falling in the bucket of security.
SDxCentral: What’s the advantage of having your own container program, Project Contiv? Is it open or does it only work in a Cisco environment?
Balaji: We have been following the emergence of containers closely, and as an infrastructure vendor, we try to understand the impact of containers on the infrastructure. Some of the impact is around the scale of endpoints that need to be managed and tracked across the infrastructure. Another big need is the ability for the infrastructure to react quickly and apply policy as microservices applications scale (at 11:20 in the video below, watch Vipin demo policy enforcement in a container environment). Even though application composition frameworks like Docker Compose and Kubernetes pod definitions are adequate for defining applications, they don’t address the applications’ infrastructure initiation and operational requirements.
We created Contiv specifically to provide a framework for operators to define operational infrastructure policies for containerized applications. It makes it possible to specify network, compute, and storage policies for applications, and it offers an implementation for enforcing those policies. Contiv comprises three components – Contiv Network, Contiv Storage, and Contiv Cluster – all available as open source projects on GitHub.
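To make the idea of an operator-defined policy concrete, here is a purely illustrative sketch. Contiv’s real policy language lives in the project’s GitHub repositories; every field, group name, and function below is hypothetical, not Contiv’s actual schema or API. The point is only the separation of concerns: the operator declares intent (who may talk to whom, on which port), and an enforcement check is evaluated per connection:

```python
# Hypothetical operator intent for a containerized web tier.
# Illustrative only - NOT Contiv's real schema or API.
POLICY = {
    "tenant": "retail",
    "group": "web",             # containers labeled as the web tier
    "allow_from": ["lb"],       # only the load-balancer group may connect in
    "port": 443,                # and only on this port
}

def connection_allowed(policy: dict, src_group: str, dst_group: str, port: int) -> bool:
    """Return True if a proposed connection complies with the operator policy."""
    if dst_group != policy["group"]:
        return True  # this policy only governs traffic into its own group
    return src_group in policy["allow_from"] and port == policy["port"]

print(connection_allowed(POLICY, "lb", "web", 443))     # allowed
print(connection_allowed(POLICY, "batch", "web", 443))  # denied
```

Because the policy is declared once per group rather than per container, new container instances spawned by the scheduler inherit it automatically – which is exactly the “react quickly as microservices scale” property described above.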
Project Contiv is meant as a community project, and it is getting interest from community members. They see the value in this effort and are actively contributing to the project.
Contiv works independently on any infrastructure. We have also enabled integration with our infrastructure products such as Cisco Nexus/ACI and Cisco UCS. For example, Project Contiv can drive the creation of Cisco ACI policies in an automated fashion – a proof point of how we automate policies for both the physical and software aspects of the infrastructure.
Vipin: Project Contiv provides an open source policy framework for predictable, secure acquisition of compute, network, and storage resources for containerized applications. It was a natural fit for Cisco to lead this effort and give back to the community, which can use and extend the code for its own use cases and products. It benefits everyone to have efficient and precisely governed container deployments.
SDxCentral: How do containers fit together with Open Source? Are they part of the same movement?
Balaji: With any technology these days, the community is interested in creating and hardening solutions through open source development. For end users, it offers an ideal way to experiment with, and implement, new technologies without being locked into a single vendor’s solution. In fact, many of these end users end up contributing their customizations back to the community. Container technologies – from the container runtime to scheduling and orchestration tools – have been open source and continue to be. I would argue that containers, being a newer technology, were born in the open source community, compared to other technologies out there. Two key container-related Linux Foundation efforts are the Open Container Initiative (OCI), which defines the container runtime and format, and the Cloud Native Computing Foundation (CNCF), which addresses other aspects of the container stack, including scheduling.
Vipin: Containers and open source are two different movements. While a lot of container-related work is open sourced, many are also building closed-source components for containers. As a developer, the best part about containers and open source is that the code is not take-it-all-or-leave-it. Instead, small packages of code are being built, which brings out the best from a distributed global workforce. It is also allowing the best tiny, very useful, well-tested units of code to emerge as fundamental Lego blocks for the future.
The competitive boundaries blur as a result, because competitors are either sharing the same code underneath or reusing each other’s code and concepts – overall, a very healthy situation for end users.