Policy is quickly emerging as the next “superhot” topic in IT, but it brings with it a sense of SDN déjà vu: its definition, its importance, and the significance of ongoing open efforts in this area all deserve some clarity.
In many cases, the term “policy” is used to describe low-level configurations for networking devices. But policy encompasses so much more — it’s essentially how business logic is enforced by the infrastructure, and it goes hand-in-hand with security.
So, how should we think about policy? For starters, policy originates inside a human’s brain and isn’t something a system can intuit, says Martin Casado, chief technology officer of networking and security for VMware.
“Policy isn’t a networking, storage, or compute thing,” Casado points out. “Humans have ideas of how a system should run, so they convert these ideas into documents, a.k.a. policy, to describe it. Policy then touches all parts of IT infrastructure and ultimately determines who has access to which resources for purposes of security, compliance, efficiency, optimality, etc.”
Why is the definition of policy unclear? “One reason policy is a tricky problem is because it’s multifaceted, and vendors are using the term in different ways to describe its various facets,” says Mike Cohen, director of product management for Cisco’s Noiro Networks.
“User intent is one of the facets of policy,” Cohen adds. “When we describe Cisco’s ACI [application-centric infrastructure] and the work we’re doing in OpenStack and OpenDaylight, it’s very focused on a policy model designed to capture user intent. It essentially shifts the abstraction models away from people requesting network resources in terms of how to configure switches and converts it to an abstract language that actually maps to how applications are built.”
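The intent model Cohen describes replaces switch configuration with declarations about how an application is built: groups of endpoints and the relationships allowed between them. The following is a minimal, hypothetical Python sketch of that idea (the group, contract, and member names are invented for illustration and are not Cisco's actual ACI API):

```python
# Hypothetical sketch of an intent-based ("group-based") policy model:
# instead of configuring switches, users declare groups of endpoints
# and the contracts allowed between them.

from dataclasses import dataclass, field


@dataclass
class EndpointGroup:
    name: str
    members: set = field(default_factory=set)


@dataclass
class Contract:
    """An allowed relationship, e.g. 'web may reach db on port 5432'."""
    consumer: str
    provider: str
    port: int


class PolicyModel:
    def __init__(self):
        self.groups = {}
        self.contracts = []

    def add_group(self, name, members):
        self.groups[name] = EndpointGroup(name, set(members))

    def allow(self, consumer, provider, port):
        self.contracts.append(Contract(consumer, provider, port))

    def is_allowed(self, src, dst, port):
        # Traffic is permitted only if a contract joins the endpoints' groups.
        for c in self.contracts:
            if (src in self.groups[c.consumer].members
                    and dst in self.groups[c.provider].members
                    and c.port == port):
                return True
        return False


model = PolicyModel()
model.add_group("web", {"10.0.0.1", "10.0.0.2"})
model.add_group("db", {"10.0.1.1"})
model.allow("web", "db", 5432)

print(model.is_allowed("10.0.0.1", "10.0.1.1", 5432))  # True
print(model.is_allowed("10.0.1.1", "10.0.0.1", 80))    # False
```

The point of the abstraction is that nothing above mentions a switch: the same declaration can be rendered onto whatever devices happen to sit underneath.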
The Policy Problem
One of the dirty little secrets of automation in the cloud is that it doesn’t actually remove humans from the loop in terms of policy. Even if you use OpenStack, Chef, Puppet, or any other automation system, you still need to ensure that humans accurately enter and configure policies.
This means that state-of-the-art policy is sort of stuck in the Dark Ages, where many people are still reading documents and converting them into low-level configurations and then managing them across dozens of devices.
“In a utopian state we’d just talk to computers and they’d understand and do the right thing…that’s the holy grail of IT…but we’re still so far away from it,” says Casado.
So the policy problem is: How do you take human or user intentions and make a computer system act on them? Put another way, it’s about talking to many incompatible pieces of infrastructure.
“Traditionally, although we knew how to accept declarations at a high level, it was too difficult to map those down in an automated way to all of the various components and heterogeneous feature sets and interfaces,” Casado says. “But this problem of mapping through all these low-level devices was solved by virtualization of the infrastructure — which enables unified, high-level abstractions.”
Now, the provisioning and management of all of these devices data-center-wide can be done through a single API rather than a bunch of distributed devices.
“So we’ve introduced this virtualization layer and, whether it’s a Docker system or KVM, we’re starting to virtualize the infrastructure and [we’re] exposing it through a single API, which means that a lot of complexity of distributed state management and mapping-out of all of these low-level interfaces has gone away,” Casado notes.
That was one of the primary bottlenecks for policy in the past, so it’s now possible to build a policy system that focuses on compiling high-level declarations down to abstractions that the virtualization layer can use.
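The pipeline Casado describes (a high-level declaration compiled down to a single virtualization API, rather than to per-device configuration) can be sketched loosely in Python. The API names and declaration format below are invented for illustration, not any vendor's actual interface:

```python
# Hypothetical sketch: compile a high-level policy declaration into calls
# against one unified virtualization-layer API, instead of emitting
# device-specific configuration for every switch and hypervisor.

class VirtualizationAPI:
    """Stand-in for the single API a virtualization layer exposes."""
    def __init__(self):
        self.calls = []  # record of provisioning calls made

    def create_segment(self, name):
        self.calls.append(("create_segment", name))

    def permit(self, src_segment, dst_segment, port):
        self.calls.append(("permit", src_segment, dst_segment, port))


def compile_policy(declaration, api):
    """Map an abstract declaration onto the virtualization layer."""
    for tier in declaration["tiers"]:
        api.create_segment(tier)
    for rule in declaration["rules"]:
        api.permit(rule["from"], rule["to"], rule["port"])


api = VirtualizationAPI()
compile_policy(
    {"tiers": ["web", "db"],
     "rules": [{"from": "web", "to": "db", "port": 5432}]},
    api,
)
print(api.calls)
# [('create_segment', 'web'), ('create_segment', 'db'), ('permit', 'web', 'db', 5432)]
```

The distributed state management that used to live in the policy tool now lives below the API line, which is why the compiler can stay this simple.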
“All of my SDN work came from wanting to make declarations at a high level for security policy and needing to make the network enforce it. One of the uses of SDN is to provide abstractions, so we’ve reached the point where we can do the dream of IT, which is a high-level declarative approach,” Casado says.
Open Policy Is Good Policy
As layers are abstracted or automated away by cloud management systems, it’s important to be aware that the new policy layer is naturally the next potential level of “lock-in.” Why? It’s where the business layer will be, and it’s also what you model code around and where you interface with a system.
This makes open efforts around policy essential.
“One reason we might see vendor lock-in of the policy layer is how the pieces beneath it are orchestrated,” says Andre Kindness, principal analyst for Forrester Research. “The way you manipulate the policy interface is going to be directly related to the underlying set of components. This is why so many companies are pushing for openness from the policy level through the components.”
Policy is “an incredibly important domain and, within the Open Source community, it isn’t a domain where a particular vendor dominates or essentially builds proprietary models and APIs,” explains Cohen. “These models and APIs need to be open. Vendors will always build different things and they’ll be able to compete on the strengths of their components — but it’s critical that these components be built around open architectures and open APIs. We’re making a huge investment here because we realize it’s an essential component.”
VMware also recognizes openness as essential to the future of networking. “It’s heartening to see multiple open efforts in this space — because it’s so important for the industry to continue to move forward in an open way,” Casado says.
Policy touches all pieces of the infrastructure and is necessary at every layer — over networking, storage, and compute. The catch? Different policy models make sense for these different pieces. Some models are declarative, while others are group-based. For networking, for example, a group-based policy model makes sense.
This is where Congress, OpenStack’s policy-as-a-service project, comes in. First, it provides a layer that allows you to declare policy over multiple components, like compute and network and storage and the apps space — it’s very general and can be extended to cover everything.
Second, it reconciles different policy models because it acknowledges that every subcomponent is going to have its own policy engine.
“Let’s say the networking piece has group-based policy, and the compute and storage pieces each have their own policy engines. Congress works to align them,” explains Casado.
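Congress itself expresses policy as declarative rules evaluated over tables of state pulled from each service. As a loose Python analogy (the server names, groups, and the rule itself are invented for illustration, not Congress's actual syntax, which is Datalog), a central engine can join compute and network state and flag rows that violate a declaration:

```python
# Loose analogy of Congress's approach: a central engine joins state
# pulled from per-component "data sources" and flags violations of a
# declarative rule. All names and data here are hypothetical.

# State pulled from the compute service: (server_id, group)
servers = [
    ("vm1", "finance"),
    ("vm2", "dev"),
]

# State pulled from the network service: (server_id, network)
networks = [
    ("vm1", "public"),
    ("vm2", "internal"),
]


def violations():
    """Declarative rule: no 'finance' server may sit on the 'public' network."""
    return [
        (sid, net)
        for sid, group in servers
        for sid2, net in networks
        if sid == sid2 and group == "finance" and net == "public"
    ]


print(violations())  # [('vm1', 'public')]
```

Because each subcomponent keeps its own policy engine, the central layer only has to state the cross-cutting rule and detect (or correct) drift, which is the alignment role Casado describes.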
What’s the project’s status? “We have a client out that builds and works, so we’re focusing on building up the basic components to support using a declarative language and some way of pulling information from other data sources and doing some kind of activity on it that’s maintaining state,” says Sean Roberts, board director and project leader for the OpenStack Foundation.
Roberts describes openness as a “critical” part of Congress. “The discussions we’ve had with companies who are highly competitive — yet interested in collaborating — are encouraging. We actually expected a lot of pushback and received none. In fact, most of the discussions center around ways to design this together.”
It’s still early for the project, which is expected to undergo many changes during the next couple of years.
“I view it as the beginning of making products like OpenStack enterprise-ready,” Roberts says. “Obviously, a lot of work remains to make it truly ready for enterprises, but the ability to set the state of the cluster and describe it in a simple declaration is pretty powerful. Once we get this up and rolling, it’ll be a big step in the enterprise direction — for a whole lot of customers.”