Business and IT executives are turning to containers to help solve their business problems, and there are ample grounds for them to see containers as a solution. Foremost among those problems is coping with an accelerating rate of technological change that affects every industry.
Although the term “digital transformation” is both overused and poorly defined, the fact remains that every business is becoming a digital business. Business processes, internal and customer-facing services, and even products are shaped by IT in ways few dreamed of in the era of PCs and mainframes. This metamorphosis is fueled by the nexus of five factors:
- Cloud services
- Constant, ubiquitous connectivity and mobile clients
- Intelligent, connected, data-generating devices
- Exponential improvements in data storage costs, performance, and density
- Machine learning (ML) and other sophisticated data analysis and artificial intelligence (AI) techniques
These factors have created a business environment of near-constant change, where the ability to spot trends and rapidly adapt to and exploit changing circumstances is not only a competitive advantage, but often the difference between success and failure. IT and developers are the foundation of such digital dexterity. They must respond to changing business conditions, customer trends, and competitive demands with the speed of a cloud-native startup. Consequently, the era of months-long application development and equipment deployment cycles is history.
Server virtualization, and the management automation software that followed, provided the first stage of infrastructure agility. However, organizations are now compelled to move even faster while stretching existing IT resources to do more, since budgets can’t keep up with the escalating pace of application and technology demands.
Containers have exploded in popularity because enterprise developers, IT, and even business leaders have learned what cloud builders like Google discovered long ago: Containers can significantly improve infrastructure and application efficiency, agility, and reliability. Their advantages fall into five categories:
Greater Efficiency and Workload Density

Containers significantly reduce the overhead on compute resources, particularly memory. This enables far more workloads per physical server, since all containers share a single host OS, meaning only one copy of the kernel and system libraries runs per system. A 2018 Sysdig survey of container usage found that the median number of containers per host increased 50 percent, from 10 in 2017 to 15 in 2018. In one case, a respondent reported running 154 containers on a single system. While the median is probably typical of the consolidation ratio of most virtualization servers (where one virtual machine (VM) per physical core is a rule of thumb), the extreme case shows that containers have the potential to scale far higher: perhaps to five or more containers per core, which translates to 320 containers on a dual-socket system using 32-core CPUs like the AMD Epyc.
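The density arithmetic above can be checked with a quick back-of-the-envelope sketch; the per-core ratios are the rule-of-thumb figures cited in the text, not measurements:

```python
# Consolidation math using the figures cited above.
SOCKETS = 2              # dual-socket server
CORES_PER_CPU = 32       # e.g., a 32-core AMD Epyc
VMS_PER_CORE = 1         # common virtualization rule of thumb
CONTAINERS_PER_CORE = 5  # upper-end container density cited above

total_cores = SOCKETS * CORES_PER_CPU
vm_capacity = total_cores * VMS_PER_CORE
container_capacity = total_cores * CONTAINERS_PER_CORE

print(f"{total_cores} cores -> ~{vm_capacity} VMs or ~{container_capacity} containers")
# 64 cores -> ~64 VMs or ~320 containers
```

Real-world density depends heavily on the memory and I/O footprint of each workload, so these ratios are a ceiling estimate, not a sizing guide.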
Greater Scalability

Greater scalability comes from container orchestration systems like Kubernetes, which can automatically place workloads on the least-used node in a container cluster and dynamically scale the number of container instances up and down in response to changing application usage. Indeed, Kubernetes 1.14 supports clusters of up to 5,000 physical nodes, 150,000 pods, and 300,000 containers. Note that pods are the smallest unit of deployment in Kubernetes and typically contain a single container; however, it is possible to run multiple containers per pod, hence the discrepancy between the pod and container limits.
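To make the pod/container distinction concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the workload name and image are hypothetical). Each of the three replicas is a pod, and each pod in this example runs exactly one container:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # hypothetical workload name
spec:
  replicas: 3                   # three pods; the scheduler spreads them across nodes
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:               # one container per pod in this example
      - name: web
        image: example.com/web-frontend:1.0   # hypothetical image
```

Scaling is then a one-line change to `replicas` (or `kubectl scale deployment/web-frontend --replicas=10`), which is what makes the dynamic scaling described above practical.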
Greater Agility in Application Delivery
This increase in agility results from the low resource overhead of containers, which allows new instances to be deployed in seconds. The automation provided by orchestration platforms like Kubernetes, AWS ECS, or Docker Swarm facilitates the creation of programmable toolchains that fully automate the application lifecycle, from code build and integration to infrastructure deployment. Such toolchains are the linchpin of continuous integration and continuous delivery (CI/CD) processes that can significantly reduce the cycle time from code to delivery while increasing reliability and repeatability.
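As a sketch of such a toolchain, a minimal pipeline definition (shown here in GitHub Actions syntax; the registry, image name, and `make test` target are hypothetical) might test, build, and publish a container image on every push:

```yaml
name: build-and-ship
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: make test            # hypothetical test target
      - name: Build container image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Push image to registry
        run: docker push registry.example.com/myapp:${{ github.sha }}
```

Tagging each image with the commit SHA is a common convention that keeps every deployed artifact traceable back to the code that produced it.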
Creation of DevOps Organizations
Related to the previous point, containers facilitate the integration of developers and IT operations into DevOps organizations that can accelerate application testing and delivery. Containers aren’t a prerequisite for DevOps, since DevOps is primarily a cultural and organizational practice, not a particular technology. But the nexus of developer and operations automation tools that containers promote does serve to break down barriers between the groups and foster cross-pollination. Containers require developers to understand infrastructure concepts, such as the distinction between instances, pods, and nodes, so that they can break down monolithic applications into decoupled services that can be easily scaled on distributed container clusters. IT operations professionals likewise need a better understanding of containerized applications; with that knowledge they can properly size clusters and develop scaling and workload placement rules that maximize performance while minimizing resource contention between workloads with similar load profiles.
New Application Creation
Containers combine deployment agility, process automation, and application scalability, allowing developers to rapidly explore and prototype new ideas while encouraging them to design differently. This different approach helps developers take advantage of distributed container systems, along with their dynamic scalability and frictionless startup and shutdown processes. Techniques such as twelve-factor apps and microservice-based architectures outline a logical structure designed to exploit the advantages of containerized cloud infrastructure. They facilitate modular designs that can be incrementally and non-disruptively updated, easily scaled, designed to survive node failures, and migrated between infrastructure environments. Cloud- and container-native design thinking both reduces the cycle time from idea to implementation and increases reliability under a wide variety of conditions.
This article was adapted from SDxCentral’s “Container Infrastructure — What You Need to Know in 2018” report.