The growing adoption of container-based applications is unsurprising given the exuberance with which developers have embraced the agile, portable, and eminently orchestratable technology. But when those apps move into production, their orchestration solutions have a dramatic impact on operational complexity. Complexity remains one of the top barriers to DevOps success, cited by 11 percent of respondents to a Quali survey on the challenges of automating and orchestrating cloud-like systems.
Part of this complexity stems from the increasing decomposition of applications into smaller, consumable chunks often referred to as “microservices.” Each microservice becomes a scalability domain in and of itself, requiring load balancing and security as well as networking support. Increasingly deployed in containers, these primarily software-based services require a great deal of inter-container and intra-cluster communication, increasing the volume and frequency of what has come to be known as East-West traffic. Traditional network-hosted solutions for scale (load balancers and the like) must adapt, not just by delivering capabilities in software but by keeping pace with the extremely dynamic nature of container-based environments.
To that end, a distinctive architectural pattern is emerging in which a lightweight load-balancing proxy resides in the container cluster to facilitate scale and assist in intra-cluster routing between services. When an application is decomposed, communication that once happened internally, within the application, must now occur externally, over the network. This introduces complexity as well as the need for scale, which is addressed by the inclusion of East-West-focused load balancing proxies.
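The East-West routing role described above can be sketched as a small registry that maps logical service names to the endpoints of their running instances and distributes requests across them. This is a minimal illustration only; the class name, service names, and endpoints are hypothetical, and a real intra-cluster proxy would add health checks, retries, and security policy.

```python
import itertools

class EastWestRouter:
    """Minimal sketch of intra-cluster routing: a registry maps logical
    service names to instance endpoints, and requests are distributed
    round-robin across the instances of each service."""

    def __init__(self):
        self._endpoints = {}  # service name -> list of "host:port" strings
        self._cursors = {}    # service name -> round-robin iterator

    def register(self, service, endpoint):
        """Record a new instance of a service and reset its rotation."""
        self._endpoints.setdefault(service, []).append(endpoint)
        self._cursors[service] = itertools.cycle(self._endpoints[service])

    def route(self, service):
        """Return the next endpoint for a service, as a local proxy would."""
        if service not in self._cursors:
            raise LookupError(f"no instances registered for {service!r}")
        return next(self._cursors[service])

router = EastWestRouter()
router.register("orders", "10.0.1.5:8080")  # hypothetical pod addresses
router.register("orders", "10.0.1.6:8080")
print(router.route("orders"))  # 10.0.1.5:8080
print(router.route("orders"))  # 10.0.1.6:8080
```

The point of the sketch is that once internal calls become network calls, something inside the cluster must own this name-to-instance mapping and keep it current.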
Architecturally, there must still be scale at the application level, which necessitates an upstream service capable of providing scale, performance, and security on the North-South network. This role is usually filled by an application delivery controller (ADC) or other load-balancing-capable proxy (in software or hardware), which facilitates communication between the applications in the cluster and the users and things external to it.
This pattern holds true across container cluster solutions, whether Kubernetes, Mesos, or OpenShift. What also holds true is the additional complexity introduced, as well as the demand that the proxies provide not just traditional functionality but support for the application programming interface (API)-driven, software-based model upon which containers, cloud, and applications are now being built.
The cloud, too, is evolving, and a new model of computing is emerging known as “serverless.” Better described as “function-as-a-service” (FaaS), this model assumes an even greater degree of dynamism than cloud or containers, firing up resources not on demand but per request. The ability of the supporting network to respond to such an environment is critical if these models are to achieve their intended benefits.
In response to these challenges, the network must, as always, address the need and demand for speed. But speed of communication is not the only concern these days. The speed with which a load balancing service, whether intra-cluster or upstream serving the application as a whole, can update its pools of resources is now critical. As containers appear and disappear, they must be added to and removed from the pool of available resources with alacrity. This puts pressure on the load balancing service to ensure its API is as scalable and performant as its core functionality. This is particularly true of solutions servicing intra-cluster communication: the East-West traffic that flows between microservices and applications.
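The requirement that the management API keep pace with container churn can be made concrete with a small sketch: a pool whose add and remove operations are constant-time and safe under concurrent use, so reconfiguration never lags behind the data path. The class and member names are illustrative assumptions, not any vendor's actual API.

```python
import threading

class LoadBalancerPool:
    """Sketch of a pool whose management API (add/remove) must be as fast
    as its data path (pick), since container churn drives constant updates."""

    def __init__(self):
        self._lock = threading.Lock()
        self._members = []  # endpoints, order preserved for rotation
        self._index = {}    # endpoint -> position, enabling O(1) removal
        self._next = 0

    def add_member(self, endpoint):
        with self._lock:
            if endpoint not in self._index:
                self._index[endpoint] = len(self._members)
                self._members.append(endpoint)

    def remove_member(self, endpoint):
        # Swap-with-last removal keeps the operation O(1), so the pool can
        # track containers disappearing without scanning the member list.
        with self._lock:
            pos = self._index.pop(endpoint, None)
            if pos is None:
                return
            last = self._members.pop()
            if pos < len(self._members):
                self._members[pos] = last
                self._index[last] = pos

    def pick(self):
        """Round-robin selection over the current members, or None if empty."""
        with self._lock:
            if not self._members:
                return None
            self._next = (self._next + 1) % len(self._members)
            return self._members[self._next]

pool = LoadBalancerPool()
pool.add_member("10.0.3.4:8080")  # hypothetical container endpoints
pool.add_member("10.0.3.5:8080")
print(pool.pick())  # one of the two registered members
```

The design choice worth noting is that membership changes and request routing share one lock and one data structure, so an update is visible to the very next request.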
Integration with the automation and orchestration of container clusters such as those managed by Kubernetes, Mesos, and OpenShift must be accomplished from the network side. No longer is the network in charge, requiring others to “ask” for changes in configuration. The network is expected to be aware of application needs. It is assumed that the network will automatically adjust its own configuration (software-defined networking (SDN), anyone?) and behavior based on the information offered by the cluster “master” nodes that manage, schedule, and orchestrate container lifecycles. To that end, services like load balancers must be able to monitor and understand the various tags, labels, and messages that announce cluster activity and react to those cues. For example, when Kubernetes announces the launch of a new container, it is the responsibility of the load balancer, both intra-cluster and upstream, to recognize the event and respond by adjusting its own configuration. Conversely, when containers “die,” the network must respond by removing them from load-balanced pools, routing tables, and security-related policies. This must happen at speed to avoid inadvertently distributing requests to an already unavailable resource.
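The event-driven pattern just described can be sketched as a handler that consumes lifecycle events and reconfigures pools without being asked. The `ADDED`/`DELETED` event types and the `app` label mirror the shape of Kubernetes watch events, but the events below are simulated; a real integration would subscribe to the cluster API server's watch stream, and the endpoints shown are hypothetical.

```python
# Sketch: the load balancer consumes container lifecycle events announced
# by the cluster and adjusts its own pools, keyed by the "app" label.
pools = {}  # app label -> set of live endpoints

def handle_event(event):
    kind = event["type"]          # "ADDED" or "DELETED"
    app = event["labels"]["app"]  # the label selects the target pool
    endpoint = event["endpoint"]
    if kind == "ADDED":
        pools.setdefault(app, set()).add(endpoint)
    elif kind == "DELETED":
        # Remove promptly so no request is routed to a dead container.
        pools.get(app, set()).discard(endpoint)

# Simulated event stream: two "web" containers launch, then one dies.
events = [
    {"type": "ADDED",   "labels": {"app": "web"}, "endpoint": "10.0.2.7:80"},
    {"type": "ADDED",   "labels": {"app": "web"}, "endpoint": "10.0.2.8:80"},
    {"type": "DELETED", "labels": {"app": "web"}, "endpoint": "10.0.2.7:80"},
]
for e in events:
    handle_event(e)

print(pools)  # {'web': {'10.0.2.8:80'}}
```

The inversion of control is the essential point: the cluster announces, and the network service reacts, rather than an operator pushing configuration to the network.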
These actions must occur at speed, lest availability or scale be compromised. A scalable, fast management plane is a requirement for networking services that expect to remain relevant and useful in container- and cloud-based environments. Both are highly reliant on automation and orchestration and thus require as much attention to their APIs as to their core capabilities. Without a fast, easy-to-use API, networking solutions risk becoming obsolete, replaced by those able to rise to the challenge of a predominantly software-based world.
Cloud forced the network to adopt a software-defined approach, both in form-factor and operation. Containers are forcing further change in the communication patterns that must occur to enable scale, routing, and security to work in concert with these increasingly volatile environments. Networks must adapt to a model that increasingly demands they react to application changes without direction, integrating more directly with the application infrastructure than in the past. Networks must, finally, become truly application-aware to support the application delivery models of the modern world.