Containers remain a tantalizing challenge for enterprises looking to garner further efficiency from their cloud deployments. While the concept behind containers is solidifying, enterprises are still struggling with the actual deployment process.
Nell Shamrell-Harrington, senior software development engineer at Chef, said she has seen a number of mistakes made by enterprises as they move to deploy containers. Many of these stem from a basic problem: enterprises are still not sure how containers differ from more traditional virtual machines (VMs). As a result, they use containers in ways they were not designed for.
VMs are basically a software abstraction of a computer that lives in the cloud, with storage and memory dedicated to that single VM. This model is the most familiar to organizations, as they can easily map the one-to-one relationship to a physical machine.
Containers are more finely grained, sharing the host's storage and memory to run multiple isolated workloads. This carries far less overhead, as each container draws only the resources needed to run its function.
After sifting through enterprise deployment challenges, Shamrell-Harrington identified four significant “anti-patterns” that are making container deployments more difficult.
“These anti-patterns are not a shaming, but more of a guide to diagnosing why an enterprise is feeling deployment pains with containers and that there is a way out of that pain,” Shamrell-Harrington explained.
The first issue Shamrell-Harrington noted was the challenge of dealing with persistent data within a container. Containers, by their nature, are not designed to be data repositories the way VMs are, yet enterprises continue trying to use containers as if they can maintain data.
Shamrell-Harrington explained that enterprises that look to tie persistent data to a container deployment are missing the entire point of containers, which is that they can be spun up and terminated quickly. Basically, no strings attached.
“You can’t afford to lose a container if you have persistent data sitting in a container,” Shamrell-Harrington said. “You end up losing the speed and flexibility in using containers … Once you have that container in production it can never go away and you can never replicate from that container image.”
She explained that attempting to maintain needed data within a running container leads to that container just growing in size and becoming something that can’t ever go away. “If you lose that container, you lose all the data within,” Shamrell-Harrington said.
For enterprises that are encountering this issue, Shamrell-Harrington said they can take a snapshot of the data in that container and then export that data into a separate container storage database. This will allow for that original container to become ephemeral and regain its deployment flexibility.
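The general pattern can be sketched with the Docker CLI. This is a minimal illustration, not a procedure from the article; the container and volume names are hypothetical, and the exact paths would depend on the application.

```shell
# Copy the data out of the existing long-lived container (hypothetical names).
docker cp app-old:/var/lib/app/data ./data-backup

# Create a named volume and seed it with the exported data,
# using a throwaway helper container to do the copy.
docker volume create app-data
docker run --rm \
    -v app-data:/dest \
    -v "$(pwd)/data-backup:/src:ro" \
    alpine cp -a /src/. /dest/

# New containers mount the volume, so the containers themselves
# stay ephemeral while the data persists outside them.
docker run -d --name app-new -v app-data:/var/lib/app/data my-app:latest
```

With the data living in the volume, the original container can be discarded and replaced at will, which restores the speed and flexibility Shamrell-Harrington describes.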
The data issue also ties into the challenge of attempting to back up a container. Shamrell-Harrington said enterprises look to back up a container because that is what they do with their VMs.
“If you are backing up a container, there is something persistent or unique in that container that makes that necessary,” she said, noting that model is counter to what a container is meant for.
Shamrell-Harrington said an enterprise can fix this issue by spinning up a new container from an existing image and testing it to see what’s missing. She said this can be done either by manual testing or by starting “bit by bit in diverting traffic from the old container to the new one and see what’s wrong.”
She noted that this should be done in a staging environment so as to not disrupt the running application. “Don’t divert all traffic to the new container initially, and hopefully you have monitoring software that can see if there are any errors,” Shamrell-Harrington explained.
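One common way to divert traffic gradually is a weighted reverse proxy in front of the two containers. The article does not specify tooling, so the nginx configuration below is purely an illustrative assumption, with hypothetical ports for the old and new containers.

```nginx
upstream app_backend {
    # Keep most traffic on the old container while the new one is validated.
    server 127.0.0.1:8080 weight=9;  # old container
    server 127.0.0.1:8081 weight=1;  # new container, spun up from the image
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}
```

Shifting the weights step by step, while watching monitoring for errors, matches the "bit by bit" diversion Shamrell-Harrington describes, and doing it in staging keeps the production application undisturbed.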
Shamrell-Harrington has also found that enterprises are often trying to include too much inside of a container. The most common issue is trying to run an entire operating system within a container.
“Containers are fast because they are not made to contain an entire operating system, for instance,” Shamrell-Harrington said. “By putting too much inside of a container, it robs the quality of having a container and can also introduce security vulnerabilities because that threat area is now much larger with a large OS inside of the container.”
She explained that by installing only the parts that an application needs to run inside of a container, the threat area is reduced.
For enterprises that have already run down this path, but are not sure how much of the full OS to back out, Shamrell-Harrington said they should begin with the existing container that is housing the entire OS. “Start a new container with just the application and see what it needs,” she explained.
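The difference shows up directly in the Dockerfile. The sketch below is a generic illustration of the principle, assuming a small Python application; the image tags and file names are hypothetical.

```dockerfile
# Anti-pattern: a full OS base image pulls in far more than the app needs,
# slowing startup and enlarging the attack surface.
# FROM ubuntu:22.04

# Instead, start from a minimal base and install only what the app uses.
FROM alpine:3.19
RUN apk add --no-cache python3
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

Starting the new container with just the application and adding dependencies only as they prove necessary, as Shamrell-Harrington suggests, keeps both the image size and the threat area small.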
Finally, Shamrell-Harrington said the fourth big issue she has seen is enterprises attempting to perform maintenance directly inside of a container. She noted this is again an issue of enterprises treating a container as they have historically treated VMs.
Performing maintenance within a container typically involves making manual changes that add to the overhead of a running container, which makes it much slower to spin up a new container.
Shamrell-Harrington said enterprises should make any maintenance changes for a running container from the container image instead of from within the running container. They can then spin up a new container using the altered image.
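In Docker terms, that workflow looks something like the following. This is a hedged sketch of the image-first approach, not a procedure from the article; the image tag and container name are hypothetical.

```shell
# Make the change in the Dockerfile (or image build), not inside
# the running container, then rebuild with a new tag.
docker build -t my-app:1.1 .

# Replace the running container with one spun up from the updated image.
docker stop app && docker rm app
docker run -d --name app my-app:1.1
```

Because every change lives in the image, any number of identical containers can be spun up from it, and no single running container accumulates state that makes it irreplaceable.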