The world of application and IT infrastructure has evolved dramatically over the past five years. As containerized application development has risen in popularity, so has the adoption of container orchestration technologies like Kubernetes. Like most open source software, container management and Kubernetes do not simply work at the push of a button — they require expertise. DevOps teams face a steep learning curve with best practices for deploying, running, and managing the lifecycle of a containerized application.
Managing these types of distributed systems requires application domain knowledge to correctly scale, upgrade, and reconfigure them while protecting against data loss or unavailability. The ideal solution is for this application-specific operational knowledge to be encoded into software that enables powerful Kubernetes abstractions to run and manage the application as intended.
Operators are pieces of software that do just that. An Operator encodes this domain knowledge and best practices for deploying and managing container-based applications. Think of an Operator as a software version of your star DevOps engineer, the person who always ensures you have the best possible infrastructure on which to build an application. By automating this function, Operators let you scale your humans better — empowering them to focus on higher-value work rather than on mundane tasks and firefighting, such as restarting and shutting down application instances, managing upgrades, or reacting to failures.
At Red Hat, we’ve seen firsthand how Operators can streamline Kubernetes infrastructure. Operators allow admins to extend Kubernetes with customized abstractions, on par with the default abstractions Kubernetes contains for stateful services and deployments. An engineer simply describes their desired state, such as “an instance of version 2.0 named staging” instead of installing all components separately, matching up IP addresses, and other minutiae.
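Concretely, that desired state is usually expressed as a Kubernetes custom resource that the Operator watches and acts on. A minimal sketch (the kind, API group, and spec fields below are hypothetical — each Operator defines its own via a CustomResourceDefinition):

```yaml
# Hypothetical custom resource describing "an instance of version 2.0
# named staging". The Operator reads this and does the rest.
apiVersion: example.com/v1alpha1
kind: AppInstance
metadata:
  name: staging
spec:
  version: "2.0"
  replicas: 3
```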
However, when it comes to leveraging Operators, there are two primary areas where ops teams need more help. First, building Operators requires deep expertise. Most organizations do not yet have the skills to efficiently build Operators, which can limit their ability to leverage containerized technologies to build applications. Second, running many Operators creates a lifecycle management problem: if different teams want to run Operators, you then have to upgrade them on independent schedules and limit who can manage them.
A Toolkit for Kubernetes Native Applications
The Operator Framework, created by the CoreOS team at Red Hat, is an open source toolkit that is designed to manage Kubernetes native applications (Operators) in a more effective, automated, and scalable way. The framework is made up of three primary components that are key to enabling enterprises to accelerate Operator development.
First is the Operator SDK, which enables developers to build, test, and package Operators based on their expertise without requiring knowledge of Kubernetes API complexities. It not only facilitates the marriage of an application’s business logic (for example, how to scale, upgrade, or back up) with the Kubernetes API to execute those operations, but also allows engineers to make applications smarter and give them the user experience of cloud services.
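At its core, the business logic an Operator author writes plugs into a reconcile loop: observe the actual state, compare it to the desired state, and act to converge the two. A sketch of that pattern in plain Go — the types here are simplified stand-ins, not the real Operator SDK API, which handles the Kubernetes plumbing for you:

```go
// Sketch of the level-triggered reconcile pattern at the heart of an
// Operator. DesiredState stands in for a custom resource's spec, and
// ActualState for what is currently observed in the cluster.
package main

import "fmt"

// DesiredState comes from the custom resource's spec.
type DesiredState struct {
	Version  string
	Replicas int
}

// ActualState is what is currently observed in the cluster.
type ActualState struct {
	Version  string
	Replicas int
}

// Reconcile compares desired and actual state and returns the actions
// needed to converge them.
func Reconcile(desired DesiredState, actual ActualState) []string {
	var actions []string
	if actual.Version != desired.Version {
		actions = append(actions,
			fmt.Sprintf("upgrade %s -> %s", actual.Version, desired.Version))
	}
	for actual.Replicas < desired.Replicas {
		actions = append(actions, "create replica")
		actual.Replicas++
	}
	for actual.Replicas > desired.Replicas {
		actions = append(actions, "delete replica")
		actual.Replicas--
	}
	return actions
}

func main() {
	actions := Reconcile(
		DesiredState{Version: "2.0", Replicas: 3},
		ActualState{Version: "1.0", Replicas: 2},
	)
	for _, a := range actions {
		fmt.Println(a)
	}
}
```

In a real Operator this loop runs continuously in response to cluster events, which is what lets it react to failures rather than only to user commands.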
Next, Operator Lifecycle Management oversees the installation, updates, and lifecycle management of all the Operators (and their associated services) running across a Kubernetes cluster. With it, administrators can control which Operators are available in which namespaces and who can interact with running Operators. They can also manage the overall lifecycle of Operators and their resources, such as triggering updates to both an Operator and its resources.
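With Operator Lifecycle Management installed, an administrator typically subscribes a namespace to an Operator and a release channel, and OLM handles installation and updates from there. A sketch of such a subscription (the Operator and catalog names below are placeholders):

```yaml
# Subscribe the "operators" namespace to the stable channel of a
# hypothetical Operator; OLM installs it and applies channel updates.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  channel: stable
  name: my-operator
  source: my-catalog
  sourceNamespace: olm
  installPlanApproval: Automatic
```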
Third, Operator Metering helps you gain insight into the usage and cost of running and managing Operators. It is designed to enable Kubernetes operations teams to associate the cost of their underlying infrastructure with the applications running on their Kubernetes clusters in a consistent way across any environment, be it on public cloud infrastructure or on premises.
Tips for Your Containerized Application Development Strategy
As specialists in Kubernetes and container management, we are often asked what tips we can offer to companies embarking on a containerized application development strategy. To get the most out of your Kubernetes-based applications without burdening development and ops teams with time-intensive processes, it’s important to consider how you are spending your team’s attention and automate where possible.
For example, in Kubernetes you have primitives, which are a set of building blocks that collectively provide mechanisms for deploying, maintaining, and scaling applications. These need to be stitched together. A framework that abstracts this level of complexity enables you to focus on higher valued tasks like developing the business logic that your application needs to be effective.
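To make the stitching concrete, even a minimal stateless service already combines several primitives: a Deployment that manages ReplicaSets and Pods, and a Service that gives those Pods a stable address. A sketch using the standard Kubernetes APIs (names and image are placeholders):

```yaml
# A Deployment keeps two replicas of a container running...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example/my-app:2.0
        ports:
        - containerPort: 8080
---
# ...and a Service routes stable traffic to whichever Pods match the label.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```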
Another tip is to focus on access controls. An inadequately secured cluster can wreak havoc on your team and environment. With role-based access control (RBAC), you can grant teams different levels of access to the cluster (isolation from a cluster, admin control for certain teams) — which is critical for enterprise teams that often run various workloads on shared clusters.
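As a sketch of what namespace-scoped RBAC looks like in practice, the following Role grants read-only access to Pods in a single namespace, and the RoleBinding attaches it to one group (the namespace and group names are placeholders):

```yaml
# Read-only access to Pods, scoped to the "team-a" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind that Role to one team's group; other namespaces are unaffected.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: pod-reader-binding
subjects:
- kind: Group
  name: team-a-devs
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```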
Lastly, don’t take the approach of “set it and forget it.” You need to own the entire lifecycle of the application to provide the best possible environment for your developers to thrive.
In partnership with the wider Kubernetes community, we have been laser-focused on making Kubernetes more secure, stable, and easy to manage. We hope that the newly open sourced Operator Framework helps Kubernetes administrators and addresses the next frontier: simplifying the building and management of Kubernetes native applications.