Sylabs' recent push to offer an enterprise version of the high-performance computing (HPC)-focused Singularity container platform underscores the continued spread of containers into more computing realms.
Singularity evolved as a way to use container architectures in HPC environments and scientific use cases. Those environments typically require more advanced security measures due to the potentially sensitive nature of data used in research and the need to be compatible with on-premises HPC systems.
Sylabs CEO Gregory Kurtzer said that compared with traditional Docker-based container architectures, Singularity includes better security due to the ability to run a container without granting users control of a root-owned daemon process or kernel feature; easier mobility of content within a container through the use of a single-file format that includes the runtime environment; and support for high-performance hardware commonly used by research labs.
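The single-file format Kurtzer describes is Singularity's SIF image, which bundles the runtime environment into one file. A minimal sketch of that workflow, assuming Singularity is installed (the image name and remote host here are illustrative, not from NCAR):

```shell
# Build a single-file SIF image from an existing Docker image
# (image name is illustrative):
singularity build analysis.sif docker://python:3.9-slim

# Run it as an unprivileged user -- unlike Docker, no root-owned
# daemon process mediates the container:
singularity exec analysis.sif python3 --version

# Because the whole runtime environment is one file, moving it to
# another system is a plain copy (host path is hypothetical):
scp analysis.sif user@hpc-login:/scratch/
```

This single-file mobility is what makes it straightforward to carry a containerized workload between a laptop and an on-premises HPC system.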
One research lab that has started dabbling in the use of containers in an HPC environment is the National Center for Atmospheric Research, which is based in Boulder, Colorado. NCAR is a federally funded research and development center that caters to service, research, and education in atmospheric and related sciences.
To facilitate the analysis of complex data sets and create the corresponding graphical models, NCAR taps into a sophisticated supercomputer system that is housed in a facility about 100 miles north in Cheyenne, Wyoming. That 5.34-petaflops supercomputer is powered by 145,152 Intel Xeon processor cores in 4,032 dual-socket nodes, and includes 313 terabytes of total memory.
Normally, users tap into that on-premises supercomputer for advanced computational tasks. However, for more mundane tasks, some of NCAR’s approximately 2,000 users are beginning to use containers as a way to run simpler analysis.
Davide Del Vento, a consulting services software engineer at NCAR's Computational and Information Systems Laboratory, said those using containers at the facility do so either because containers help with some of their analysis work or because they improve their workflow.
“At this point it’s a limited percentage of users,” Del Vento said. “They have either seen containers at an event or heard about them and want to take advantage of what they can offer. I anticipate it will grow as we are seeing how it’s growing in the enterprise. I suppose it will grow in our environment as well.”
In terms of which container platforms are being used, Del Vento said users typically ask for Docker. However, that option is constrained, as NCAR does not yet allow just any user to run traditional Docker containers in its HPC environment.
“We can’t give users Docker on HPC,” explained Nate Rini, a software engineer at NCAR. “It gives them root access, and that just can’t happen.”
Rini did note that trusted administrators use Docker on HPC, but that remains limited.
Rini explained that security is the main reason NCAR can't give users root access. If a user imported a compromised image and ran it in a container, the compromise could spread throughout NCAR's computational infrastructure.
Rini noted that NCAR has been experimenting with various container platforms, including running Singularity on a test system "for some time, but it has similar security issues as Docker."
“If you trust your users you can let them do whatever they want and it acts like Docker,” Rini said. “Worst-case scenario is that you take out the test machine and are sad for a little while. But, that’s what they are there for.”
Rini said that security issues were becoming less of a concern, citing a lot of active work by developers on the kernel. This includes work in both the Singularity and Docker communities.
In an environment with access to essentially unlimited computational power, Del Vento said, performance can also be an issue when using containers. Sometimes, though, a user values convenience over speed and ends up running some analyses in a Docker container on a traditional public cloud.
Rini added that this is one of the tradeoffs that must currently be made for NCAR users when deciding to use containers.
“Containers will make life easier for users that don’t want full performance,” Rini said. “If the user is trying for every last flop they can, it might make their lives harder” as the analysis will take a lot longer.
Rini noted that Singularity is fast compared with Docker because it "does not do any network shenanigans like Docker does." He explained that Docker places each container on an isolated network with a very specific path in and out.
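The network isolation Rini describes can be seen in Docker's default bridge-networking workflow. This is a generic sketch (network, container, and image names are illustrative, not NCAR's setup), assuming Docker is installed:

```shell
# Create a private bridge network; containers attached to it are
# isolated from the host's network namespace:
docker network create --driver bridge demo-net

# Start a container on that network; the host can reach it only
# through the explicitly published port (8080 -> 80):
docker run -d --name web --network demo-net -p 8080:80 nginx

# Traffic must take this specific published path in and out:
curl http://localhost:8080
```

Singularity, by contrast, runs its processes directly in the host's network namespace by default, avoiding this translation layer, which is part of why it tends to suit latency-sensitive HPC workloads.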
Other HPC Options
Rini said NCAR has also developed its own container platform that is more focused on the needs of its users. The platform, cheekily dubbed Inception, is a lightweight container runtime primarily targeted at HPC.
There are also a number of other research-focused container platforms available designed to shore up security and speed issues. These include Shifter, which was developed by the National Energy Research Scientific Computing Center (NERSC), and Charliecloud from the Los Alamos National Laboratory.
Rini said he expects the use of containers at research facilities and in HPC environments to increase over the next couple of years as progress is made in terms of security and speed.
“I think there is a bright future for containers,” Rini said.
Photo courtesy NCAR/UCAR Computational & Information Systems Lab.