Container software systems such as Docker are increasingly being used to build large platforms for distributed applications that run in the cloud, a trend that is driving the need to understand container networking. The primary advantage of containers is that they can run large, distributed applications with low overhead by sharing a stripped-down operating system (usually based on Linux), an approach that can be more efficient than using virtual machines.
Compared to virtual machines (VMs), such as those produced and managed within VMware environments, containers are much simpler. They often run minimal Linux operating systems such as CoreOS or, more recently, a minimal Windows Server edition called Nano Server. But to function properly, container systems need networking functionality to connect distributed applications across the cloud.
Birth of a Distributed System for Container Networking
A container is produced from a prepared image pulled from a designated registry, such as Docker Hub or a private registry. Think of this image as a snapshot of just the software needed to run a particular workload, along with only the software it depends upon to function and nothing else. When the daemon instantiates a container, it assigns the container a unique network address and connects it to a virtual Ethernet bridge, which in the case of Docker is called docker0. All containers on the host communicate with one another by directing packets to docker0, which forwards them across the subnet automatically. CoreOS’s container runtime, rkt (originally called Rocket), handles networking in much the same fashion.
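The address-assignment step can be sketched in a few lines. This is a toy model using Python's standard `ipaddress` module, not Docker's actual IPAM driver; the subnet 172.17.0.0/16 is the common docker0 default, and the container names are invented for illustration.

```python
import ipaddress

# Toy model of bridge-network address assignment. Docker's real IPAM
# driver is more elaborate, but the idea is the same: 172.17.0.0/16 is
# the common default subnet for docker0, and the first usable address
# belongs to the bridge itself.
subnet = ipaddress.ip_network("172.17.0.0/16")
pool = subnet.hosts()                  # generator over usable addresses
bridge_ip = next(pool)                 # 172.17.0.1 goes to docker0

allocations = {}

def assign_address(container_name):
    """Hand the next free address on the bridge subnet to a container."""
    addr = next(pool)
    allocations[container_name] = addr
    return addr

web = assign_address("web")            # gets 172.17.0.2
db = assign_address("db")              # gets 172.17.0.3

# Every container address lies on the bridge subnet, so docker0 can
# forward packets between containers without leaving the host.
print(bridge_ip, web, db)
```

Because every allocated address falls inside the bridge's own subnet, docker0 needs no external routing knowledge to forward traffic between containers.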
A Layer 3 router may be used to direct all container-bound traffic to a default gateway that is managed completely by the container runtime system, bypassing the conventional Layer 2 switch (see Figure 1).
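The routing decision described above boils down to longest-prefix matching: container-bound destinations match the container subnet's route, and everything else falls through to the ordinary default route. The sketch below illustrates that decision in Python; all addresses and next-hop labels are illustrative, not taken from any real deployment.

```python
import ipaddress

# Sketch of the longest-prefix-match decision a Layer 3 router makes:
# traffic bound for the container subnet is handed to the gateway the
# container runtime manages, while everything else follows the normal
# default route. Addresses and labels here are illustrative.
routes = {
    ipaddress.ip_network("172.17.0.0/16"): "container gateway (docker0)",
    ipaddress.ip_network("0.0.0.0/0"): "default gateway via 10.0.0.1",
}

def next_hop(destination: str) -> str:
    dst = ipaddress.ip_address(destination)
    matches = [net for net in routes if dst in net]
    # The most specific (longest) matching prefix wins.
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("172.17.0.5"))     # container-bound: runtime's gateway
print(next_hop("93.184.216.34"))  # anything else: normal default route
```

Since the container subnet's /16 prefix is more specific than the /0 default route, container-bound traffic never reaches the conventional Layer 2 path.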
This containerized software system can be used to build new types of applications, constructed from microservices connected to one another by means of network virtualization. A conventional application inhabits the addressable memory space of a processor or set of processor cores, where the operating system facilitates communication through a sophisticated, but often slow and tedious, procedure. With microservices, each container behaves as though it were its own Internet site, complete with its own Web server. Containers can therefore address one another’s workloads through API calls, with service names resolved by a DNS server designated by the container daemon.
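The "each container is its own site" idea can be demonstrated with nothing but the standard library: one process runs a tiny web server standing in for a containerized service, and a client looks up its address and calls its API over HTTP. The service name "inventory", its JSON payload, and the dictionary standing in for DNS-based service discovery are all invented for this sketch.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stand-in for a containerized microservice: a tiny web server that
# answers an API call with JSON. In a real deployment "inventory" would
# be a container name resolved by the daemon-designated DNS server;
# here a plain dict plays the role of service discovery.
class InventoryAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sku": "A-100", "count": 7}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), InventoryAPI)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Service discovery": map a service name to a network address.
registry = {"inventory": f"http://127.0.0.1:{server.server_port}"}

with urllib.request.urlopen(registry["inventory"] + "/stock") as resp:
    stock = json.load(resp)

server.shutdown()
print(stock)
```

The caller never needs to know where the service runs, only its name; swapping the dict lookup for a real DNS query is exactly the substitution the container daemon performs.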
As a virtual subnet, a Docker or rkt container network is a type of software-defined network. Docker was among the first companies to recognize the significance of this fact. In March 2015, it acquired a startup called SocketPlane, which had begun developing an SDN controller exclusively for Docker container networks, bypassing the physical NIC. Just three months later, SocketPlane’s technology was integrated into a new Docker plug-in called libnetwork.
Separating the Docker subnet from the underlying network hardware is extremely important, because it enables entire container networks to be migrated between platforms, including from private to public clouds, without modification. Technically, container networks can now span multiple platforms — for example, with principal services running on-premises while overflow containers are instantiated on a public cloud, such as Amazon’s AWS.
Since many components of a Docker environment are designed with a “pluggable” architecture, they can be swapped for alternatives. Weaveworks produces one such alternative, called Weave, which takes a sophisticated SDN approach built on fully encrypted packets capable of traversing firewalls (see Figure 2). A Weave network is deployed in such a way that internal resources outside the container network, such as databases and server logs, become addressable by containers as though they shared the same network. Likewise, services running inside containers may be exposed to outside resources over a secure connection maintained and monitored by Weave.
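Conceptually, an encrypted overlay wraps each container-level packet inside an ordinary host-to-host packet, so firewalls and the underlay network only ever see the outer addresses. The sketch below illustrates that encapsulation idea; the HMAC seal, shared key, and host/container addresses are placeholders for illustration, not Weave's actual protocol or cryptography.

```python
import hashlib
import hmac
import json

# Toy illustration of overlay encapsulation: the container-level packet
# is sealed and carried inside a host-to-host packet. The HMAC "seal"
# is a stand-in for real encryption; keys and addresses are invented.
SECRET = b"shared-overlay-key"

def encapsulate(inner_src, inner_dst, payload):
    inner = json.dumps(
        {"src": inner_src, "dst": inner_dst, "data": payload}
    ).encode()
    seal = hmac.new(SECRET, inner, hashlib.sha256).hexdigest()
    # The outer header carries *host* addresses, so the underlay network
    # never needs to know the container subnet exists.
    return {"outer_src": "host-a", "outer_dst": "host-b",
            "sealed": inner, "mac": seal}

def decapsulate(packet):
    expected = hmac.new(SECRET, packet["sealed"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, packet["mac"]):
        raise ValueError("tampered packet")
    return json.loads(packet["sealed"])

pkt = encapsulate("10.32.0.2", "10.32.0.3", "SELECT 1")
print(decapsulate(pkt)["data"])
```

Because the inner addresses travel opaquely inside the outer packet, the two containers can keep their overlay addresses no matter which hosts, or which side of a firewall, they land on.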
Weaveworks’ engineers are partly responsible for the creation of a new networking model within Docker itself, one that enables third-party plug-ins to be easily distributed and installed. These plug-ins extend the functionality of the Docker environment. One key example involves moving a website published by containers from one platform to another while maintaining its running status. Such a task calls for a plug-in to maintain the data and databases used by that site as Web server containers are removed from one platform and re-instantiated elsewhere. This is done by way of two Docker plug-ins: Weave and ClusterHQ’s Flocker, a persistent data storage service. The capability for the two plug-ins to address each other was devised jointly by Weaveworks, ClusterHQ, and Glider Labs.