The Cloud Native Computing Foundation (CNCF) is gearing up for a push to make the Container Network Interface (CNI) an industry standard.
CNI is already supported by Kubernetes, the Cloud Foundry Foundation, and Mesosphere; Docker, however, developed its own virtual network overlay. As adoption of containerized applications continues to accelerate, CNCF Chief Operating Officer Chris Aniszczyk said the time has come to standardize on a common interface to foster network interoperability, particularly now that most container networking technologies are open source.
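CNI's appeal as a standard lies in its simplicity: plugins are driven by small JSON configuration files. A minimal network definition, assuming the standard `bridge` and `host-local` reference plugins and an illustrative subnet, looks something like this:

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

A container runtime that supports CNI hands a file like this to the named plugin, which then wires the container into the network, so any runtime speaking CNI can use any conforming plugin.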
“The network overlay is essentially free,” said Aniszczyk.
He expects the effort to standardize on CNI to occur in parallel to the work the CNCF is doing to standardize storage orchestration engine interfaces for containers as well.
Mark Church, a solutions architect for Docker, Inc., said there’s been some discussion within Docker about adding support for CNI. But he added that no one should expect to see any movement in that direction any time soon.
“The short answer is not yet,” said Church.
Charlotte Dunlap, principal analyst for GlobalData Technology, a market research and consulting firm, said the combination of a lack of container networking standards and the relative immaturity of the technologies themselves scares some potential adopters away.
“One of the major barriers to adoption for containers is provisioning,” said Dunlap. “And networking is at the heart of the problem.”
In fact, Nick Shadrin, a technical solutions architect for NGINX, a provider of open source load balancing software, said there are still many container networking issues that remain unresolved.
“There are still a lot of stability and reliability issues,” said Shadrin. “You don’t see high-performance applications running on containers.”
One of the early adopters of container networking is Visa, which has one application in production. But that application still relies on Docker bridge networking rather than the network overlay technology that Docker has developed. Sasi Kannappan, Visa’s chief system architect, told attendees at the DockerCon 2017 conference this week that Visa does plan to implement Docker network overlays.
“We’re not running Docker network overlays in production yet,” said Kannappan.
Kannappan said Visa felt the need to modernize its network to scale dynamically as application usage changes over time. The reason Visa chose Docker Networking is that the network overlay is embedded in Docker Engine, which Kannappan said greatly reduces operational overhead and makes networks more responsive to the dynamic nature of container environments. In addition, Kannappan said Visa doesn’t need to deploy third-party load balancing software within its Docker container application because Docker Networking provides that capability.
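The embedding Kannappan describes comes from Docker Engine's swarm mode, where the overlay driver and service-level load balancing are built in rather than bolted on. A sketch of the commands involved (the network and service names here are illustrative, not Visa's):

```shell
# Enable swarm mode, which the built-in overlay driver requires
docker swarm init

# Create a multi-host overlay network
docker network create -d overlay app-net

# Run a replicated service on that network; swarm mode spreads
# incoming requests across the replicas without any third-party
# load balancer
docker service create --name web --network app-net --replicas 3 -p 8080:80 nginx
```

Because the overlay and the load balancing ship with the engine itself, there is no separate networking stack to deploy and keep in sync with the containers, which is the operational saving Kannappan points to.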
To integrate that container application with other applications, Kannappan said Visa is relying on a Consul DNS server from HashiCorp.
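Consul exposes registered services through a DNS interface (by default on port 8600), which is what makes it a convenient bridge to applications outside the cluster: they can discover containerized services with an ordinary DNS lookup. A sketch, assuming a service registered under the illustrative name `payments`:

```shell
# Ask Consul's DNS interface (default port 8600) for the
# address and port of a registered service
dig @127.0.0.1 -p 8600 payments.service.consul SRV
```

External applications that can resolve the `.service.consul` domain need no container-specific tooling to find services running inside the Docker environment.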
While clusters are built using Kubernetes or one of the flavors of Docker, each with its own internal networking capabilities, most of those clusters will be integrated with other applications using virtual network overlays. In time, many of the providers of those network overlays will develop plug-ins for Kubernetes and Docker. But until container networking matures considerably, Docker containers are going to be running across more legacy networks than Docker would probably prefer.