Which is better, containers or virtual machines (VMs), depends on the user’s goal. Virtualization enables workloads to run in environments that are separated from their underlying hardware by a layer of abstraction. This abstraction allows servers to be broken up into VMs that can run different operating systems.
Container technology offers an alternative method of virtualization, in which a single operating system on a host can run many different applications, each isolated from the others. One way to think of containers vs. VMs is that while VMs run several different operating systems on one compute node, container technology virtualizes the operating system itself.
Benefits of Multiple Copies
The key characteristic distinguishing a container from a Virtual Machine (VM) is the container’s ephemeral nature. In a modern orchestration system, multiple copies of a container coexist. Containers that fail can be removed and replaced without noticeable impact on service. In the most radical environments, where continuous delivery methods are in place, new or experimental versions of containers may coexist with older versions. If an experiment fails, all the newer versions can be rolled back and replaced. These new and vastly different methods of managing data centers are key to the overwhelming interest the telecom industry has displayed in Docker and its associated technologies.
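The replace-rather-than-repair pattern described above can be sketched in a few lines of Python. This is a toy model only; the `ReplicaSet` class and its desired-count reconciliation loop are illustrative inventions, not any orchestrator’s actual API.

```python
import itertools

class ReplicaSet:
    """Toy model of an orchestrator that keeps N copies of a container alive."""

    def __init__(self, image, desired=3):
        self.image = image
        self.desired = desired
        self._ids = itertools.count(1)
        self.replicas = set()
        self.reconcile()  # start the desired number of copies

    def _new_id(self):
        return f"{self.image}-{next(self._ids)}"

    def report_failure(self, replica_id):
        # A failed copy is removed and replaced, never repaired in place.
        self.replicas.discard(replica_id)
        self.reconcile()

    def reconcile(self):
        # Add fresh copies until the desired count is restored.
        while len(self.replicas) < self.desired:
            self.replicas.add(self._new_id())

rs = ReplicaSet("web", desired=3)
failed = sorted(rs.replicas)[0]   # "web-1" fails...
rs.report_failure(failed)         # ...and a fresh copy takes its place
```

Because the remaining copies keep serving traffic while the replacement starts, the failure is invisible to clients; the same mechanism lets an experimental version be rolled back simply by replacing its copies with the older image.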
Evolution of Containers
The concept of containerization was originally developed not as an alternative to VM environments, but as a way to isolate applications on AT&T’s Unix operating system. The FreeBSD Jails that followed allowed a computer system to be partitioned into many independent, smaller systems, each of which could be assigned its own IP address, resembling modern container systems. In later Linux environments, containers could not access the hardware directly; the kernel was still responsible for executing the workload, albeit through a layer of abstraction.
The notable early container system, Linux Containers (LXC), was not yet a complete environment; it was developed as a component of Linux itself. In its early stages, and until v1.10, Docker relied on LXC to run its containers.
Replacements for Hypervisors
Workloads within containers such as Docker’s are virtualized. However, within Docker’s native environment, there is no hypervisor. Instead, the Linux kernel (or, more recently, the Windows Server kernel) is supplemented by a daemon that maintains the compartmentalization between containers while connecting their workloads to the kernel. Modern containers often do include a minimal operating system such as CoreOS or VMware’s Photon OS, whose only purpose is to maintain basic, local services for the programs they host, not to project the image of a complete processor space.
Comparison of how containers and virtual machines are organized. Source: Docker
Microservices are an architectural pattern first employed at Google and at video streaming service provider Netflix. They are functions that can operate without exclusivity to any single application: they perform small workloads, can be contacted via APIs, and produce discrete outputs. Such functions exist in traditional, monolithic applications today, although multiple applications instantiate the same functions redundantly. In a microservices architecture, these functions behave more like libraries, providing service to any and all applications that require them.
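As a sketch of the idea, the snippet below stands up a hypothetical single-function microservice over HTTP using only Python’s standard library. The `/size/<width>/<height>` route and the `recommended_size` function are invented for illustration; a real service would add validation, logging, and a production-grade server.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def recommended_size(width, height):
    # One small, discrete workload: fit dimensions into a 128-pixel box.
    scale = 128 / max(width, height)
    return {"width": round(width * scale), "height": round(height * scale)}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Assumed path format: /size/<width>/<height>
        _, _, w, h = self.path.split("/")
        body = json.dumps(recommended_size(int(w), int(h))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any application that needs this function simply calls the API.
url = f"http://127.0.0.1:{server.server_port}/size/1920/1080"
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)
server.shutdown()
```

The point of the pattern is visible in the last few lines: the caller knows only the API, not the implementation, so the same function can serve any number of applications without being compiled into each one.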
Scaling containerized workloads is a completely different process from scaling VM workloads. Modern containers include only the basic services their functions require, but one of those can be a web server, such as NGINX, which can also act as a load balancer. An orchestration system, such as Google’s Kubernetes or Mesosphere’s Marathon, can determine, based upon traffic patterns, when the number of containers needs to scale out; it can replicate container instances automatically and later remove them from the system.
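The scale-out decision can be reduced to a proportional rule, similar in spirit to the one Kubernetes’ Horizontal Pod Autoscaler applies. The Python sketch below is a simplified model: `target_per_replica` is an assumed per-container capacity figure, and the request-rate numbers are invented for illustration.

```python
import math

def desired_replicas(current_load, target_per_replica, minimum=1):
    """Proportional scale-out rule: run enough replicas that each stays
    at or below its target load, but never fewer than `minimum`."""
    return max(minimum, math.ceil(current_load / target_per_replica))

# Traffic rising from 220 to 900 requests/s, assuming each container
# can comfortably serve 100 requests/s:
before = desired_replicas(220, 100)  # modest traffic
after = desired_replicas(900, 100)   # spike: scale out
quiet = desired_replicas(40, 100)    # lull: scale back in, floor of 1
```

A real orchestrator layers damping, cooldown windows, and min/max bounds on top of this rule so that short traffic blips do not cause replicas to churn.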
A VM is a software-based environment geared to simulate a hardware-based environment, to best serve the applications it hosts. Conventional applications are designed to be managed by an operating system and executed by a set of processor cores. Such applications can run within a VM without any re-architecture.
With VMs, a software component called a hypervisor acts as an agent between the VM environment and the underlying hardware, providing the necessary layer of abstraction. A hypervisor, such as VMware ESXi, is responsible for executing the VM assigned to it and can execute several simultaneously. Other popular hypervisors include KVM, Citrix Hypervisor, and Microsoft Hyper-V. In VM environments, modern processors are capable of interacting with hypervisors directly, through hardware virtualization extensions such as Intel VT-x and AMD-V. VM environments also include sophisticated network virtualization (NV) models such as VMware NSX.
The scalability of a VM server workload is achieved in much the same way it is achieved on bare metal: the programs responsible for delivering services, such as web servers or database servers, are distributed among multiple hosts, and load balancers are inserted in front of those hosts to direct traffic among them equally. Automated procedures within VM environments make such load balancing processes sensitive to changes in traffic patterns across data centers.
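The distribution step can be sketched with the round-robin strategy that such load balancers commonly start from. This is a minimal Python model; the host names are hypothetical, and production balancers add health checks, weighting, and session affinity.

```python
import itertools

class RoundRobinBalancer:
    """Directs each incoming request to the next host in turn,
    spreading traffic equally across the pool."""

    def __init__(self, hosts):
        self._cycle = itertools.cycle(hosts)

    def route(self, request):
        # Pick the next host in rotation and pair it with the request.
        host = next(self._cycle)
        return host, request

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
assignments = [lb.route(f"req-{i}")[0] for i in range(6)]
# Six requests land evenly: two on each of the three hosts.
```

Traffic-sensitive environments replace the fixed rotation with weights or least-connections counts, but the front-of-pool position of the balancer is the same.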
Updated May 2019