It has to do with what’s called “immutable infrastructure,” as Eric Brewer, Google’s vice president of infrastructure, explained during today’s keynote at GCP NEXT, the developer conference for Google Cloud Platform.
The term refers to infrastructure that remains fixed after deployment. An ordinary switch or router would count as immutable. So do containers, since you don’t change the container itself after deploying it.
Pods — Google’s term for collections of containers — count as immutable, too, Brewer said. Once a pod is in place, it has a stable IP address.
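For readers unfamiliar with the format, a pod of this era was described declaratively; the sketch below is a minimal, hypothetical example (the names and images are invented), showing two containers grouped behind the pod's single stable IP:

```yaml
# Hypothetical two-container pod. Both containers share the pod's one
# IP address and can talk to each other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: app
      image: example/app:1.0
      ports:
        - containerPort: 8080
    - name: log-shipper
      image: example/log-shipper:1.0
```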
With Google Deployment Manager, available in Kubernetes 1.2, Google has decided to create immutable clusters — “cluster” referring to the collection of nodes that container pods get deployed onto.
Google’s infrastructure looks at servers and storage as a “sea of resources,” so the idea here is to create a template for a multinode deployment, Brewer said. To stretch the analogy a little too far: Rather than fish through the sea to gather resources for every cluster, you could grab a copy of the template and have it do the work.
It’s an approach to configuration that would be simpler than using command-line interfaces, which tend to be based on homegrown languages. Moreover, “If I add a new thing I want to configure, I have to add it to the language,” Brewer said.
Scripting is easier than using commands, “but it’s very fragile. We find that if something stops in the middle of an initialization script, now you’re in this weird state” and probably stuck.
Containers, Graphs, and Volumes
Deployment Manager involves creating a graph (a structure defining which nodes are connected to each other) and then deploying it.
Graph creation is done with a declarative framework — that is, rather than writing a script or issuing specific commands, you use a language to describe what you want in the graph. Google’s choice was a mildly modified version of YAML, Brewer said.
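As a hedged illustration of what "describe what you want" means in practice, a Deployment Manager-style template might look roughly like the following; the resource names, types, and properties here are hypothetical, not taken from Google's actual schema:

```yaml
# Hypothetical declarative template: state the desired graph of
# resources rather than scripting the steps that would build it.
resources:
  - name: frontend
    type: replicatedservice
    properties:
      replicas: 3
      container_port: 8080
      image: example/frontend:1.0
  - name: backend
    type: replicatedservice
    properties:
      replicas: 2
      image: example/backend:1.0
```

The point of the declarative form is that adding a new configurable thing means adding a new entry to the description, not extending a homegrown command language.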
The graph defines how applications will be packaged. The key to avoiding the above-mentioned problem with scripting is to put the configuration information in a separate volume on the servers — drop it into a separate box, essentially. The configuration will be applied only at the time the containers are deployed.
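Kubernetes 1.2 also shipped ConfigMaps, which express this pattern directly: configuration lives in its own object and is mounted into the pod as a volume only at deploy time. A minimal sketch, with hypothetical names:

```yaml
# Configuration kept outside the container image and mounted at
# deploy time, so the image itself stays immutable.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    log_level=info
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: example/app:1.0
      volumeMounts:
        - name: config
          mountPath: /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config
```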
This has the beneficial side effect of letting operators peek at the configuration information. That couldn’t be done before: once a job was running, Google couldn’t find out how it had been configured, Brewer said.

As an aside, Brewer noted that the separate configuration volume is also a good place to store encryption keys. More specifically, he said it’s really dumb to store encryption keys inside a container; they should always live somewhere else, such as this configuration box.
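Kubernetes models "somewhere else" with Secrets: key material sits in a separate object and is mounted into the pod as a read-only volume, never baked into the image. A minimal sketch with hypothetical names and placeholder key material:

```yaml
# Key material stored in a Secret object, not in the container image.
apiVersion: v1
kind: Secret
metadata:
  name: app-keys
type: Opaque
data:
  signing.key: c2VjcmV0LWtleS1tYXRlcmlhbA==  # base64-encoded placeholder
---
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: example/worker:1.0
      volumeMounts:
        - name: keys
          mountPath: /etc/keys
          readOnly: true
  volumes:
    - name: keys
      secret:
        secretName: app-keys
```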
Deployment Manager is available in open source form, and it’s been added to the Helm project in Kubernetes.
By the way, the 1.2 version of Kubernetes also pushes container orchestration to a larger scale: it can now handle 1,000 nodes and 30,000 pods, Brewer said.
“It really means that scaling is not a practical limit for any team. I’d be really surprised if you need more than 30,000 pods. If you do, we’d be happy to host you.”