While it is somewhat in vogue to paint cloud and multi-cloud as movements driven primarily by cost reduction, the reality is that once the transformation is complete, operational agility will dominate the scoresheet. Enterprises adopting multi-cloud practices are paying for the operations, not the servers.
With multi-cloud viewed as the operational end game, success will not be determined by moving a few key workloads to a public cloud service. Instead, it will be characterized by the adoption of an operational framework that’s capable of exploiting the advantages of a multi-cloud world.
Multi-Cloud as an Operational State
Multi-cloud is the unified management of disparate underlying infrastructure domains as a single cohesive set of resources. It is defined both by the use of diverse infrastructure pools and by the singular approach to managing those resources. But as a technological foundation, multi-cloud is more an operational condition than a description of the core infrastructure.
In a networking context, multi-cloud is about providing seamless connectivity across a network that spans multiple clouds. These clouds can be a mix of public and private, on-premises, and off-premises. But the distinguishing characteristic is that they are operated as a single entity. In this regard, though multi-cloud has requirements for underlay transport, it is built primarily around higher-layer functionality.
Because multi-cloud requires a unified management approach, there is a sharp distinction to be made between multiple clouds and multi-cloud. The promise of multi-cloud is not about simply fracturing the IT environment into cloud-specific shards that encompass infrastructure and operations for bounded domains. Multi-cloud is about managing resources as a single, cohesive infrastructure — regardless of whether they reside in a private cloud, public cloud A, or public cloud B.
Given the expansive nature of multi-cloud — extending end-to-end across multiple clouds and the on-ramps to those clouds — multi-cloud will need to be a multi-vendor proposition. Enterprises will not have the luxury of replacing equipment across all corners of their infrastructure under one vendor to achieve a unified operating environment.
This will naturally lead to the rise of multi-cloud orchestration platforms capable of translating application policy into whatever format is required for a particular supporting component. Be it public or private, physical or virtual, virtual machine (VM) or container, these orchestration platforms will need to apply policy uniformly, serving as an abstraction to the underlying infrastructure.
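As a minimal sketch of the abstraction described above, consider a single vendor-neutral application policy rendered into two different backend formats. All class, function, and field names here are hypothetical illustrations, not any particular platform's API.

```python
# One application-level intent, rendered per backend by an orchestrator.
# Names and formats are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AppPolicy:
    """Vendor-neutral intent: allow one service to reach another on a port."""
    source: str
    destination: str
    port: int

def render_acl_cli(policy: AppPolicy) -> str:
    """Render the policy as a CLI-style ACL line for a physical device."""
    return f"permit tcp {policy.source} {policy.destination} eq {policy.port}"

def render_cloud_rule(policy: AppPolicy) -> dict:
    """Render the same policy as a JSON-style security-group rule."""
    return {"src": policy.source, "dst": policy.destination,
            "protocol": "tcp", "port": policy.port}

# The same intent, two renderings -- the orchestrator picks per backend.
policy = AppPolicy("web-tier", "db-tier", 5432)
print(render_acl_cli(policy))   # CLI form for a physical device
print(render_cloud_rule(policy))  # structured form for a cloud API
```

The point of the sketch is the shape, not the formats: applications express intent once, and the orchestration layer owns the per-backend translation.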
Of course, this is only possible if multi-cloud operations are built around standards-based approaches to pushing policy into the infrastructure. Whether it’s protocols like the network configuration protocol (NETCONF), vendor-neutral data models like OpenConfig, or open source libraries like NAPALM (network automation and programmability abstraction layer with multivendor support), orchestration will need to be built on an open platform.
From Orchestration to Automation
But orchestration itself is only part of the end game. In a multi-cloud world, workloads are not static, which means orchestration must evolve into a platform for automation.
The most basic premise of automation is see something, do something. That essentially means automation is less about shortening the time to key in a change and more about dynamically reacting to events in and around the infrastructure. While the emphasis tends to be on the do something part of automation, there is no transformative change without first being able to see something.
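The "see something, do something" premise can be sketched as a set of rules, each binding an observed condition to a remediation action. The metrics, thresholds, and actions below are hypothetical examples, not a real automation system's vocabulary.

```python
# "See something, do something" as code: each rule pairs a condition
# (see) with an action (act). Event fields are illustrative assumptions.

from typing import Callable

Rule = tuple[Callable[[dict], bool], Callable[[dict], str]]

rules: list[Rule] = [
    # See: link utilization above threshold. Do: shift traffic away.
    (lambda e: e.get("metric") == "link_util" and e["value"] > 0.9,
     lambda e: f"reroute traffic away from {e['device']}"),
    # See: an instance reporting unhealthy. Do: replace it.
    (lambda e: e.get("metric") == "health" and e["value"] == 0,
     lambda e: f"replace instance {e['device']}"),
]

def react(event: dict) -> list[str]:
    """Return the actions triggered by a single observed event."""
    return [act(event) for see, act in rules if see(event)]

print(react({"metric": "link_util", "value": 0.95, "device": "edge-1"}))
```

Note that the `react` function is inert without events to feed it, which is the article's point: the "do something" half is useless until monitoring supplies the "see something" half.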
Because multi-cloud is about end-to-end control, end-to-end monitoring will lie at its heart.
In order for an automated multi-cloud environment to work, data must be accessible and robust. The more data in the hands of the automation systems, the more workflows that can be automated.
Accordingly, automation architects should deliberately deploy monitoring and visualization tools that broaden the available data. Combining device-level information about compute, storage, and networking with application and user data grants a wider view of what is occurring within the infrastructure.
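A tiny sketch of that correlation step, under assumed data shapes: device-level telemetry is joined with application-level metrics so an operator (or an automation rule) sees both at once. Every field name here is hypothetical.

```python
# Join device-level telemetry with application-level data to get the
# broader view described above. All field names are illustrative.

device_metrics = {"edge-1": {"cpu": 0.82, "link_util": 0.91}}
app_metrics = {"checkout": {"latency_ms": 480, "served_by": "edge-1"}}

def correlate(app: str) -> dict:
    """Attach the serving device's metrics to an application's metrics."""
    record = dict(app_metrics[app])
    record["device"] = device_metrics.get(record["served_by"], {})
    return record

view = correlate("checkout")
print(view)  # app latency alongside the device it depends on
```

With the joined record, a single rule can now ask questions that span domains, such as whether high checkout latency coincides with a saturated link on the device serving it.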
And when done in real time, this becomes the key to unlocking a dynamic multi-cloud environment.
Architect for Visibility
While multi-cloud begins with cloud for most enterprises, the truly transformative benefits are actually more foundational. They begin with operations, which itself begins with monitoring.
As enterprises architect their multi-cloud futures, they need to make sure that the operations teams are involved at the outset. Having an explicit data strategy will feed requirements into the supporting devices and tools. What data model support is necessary? What APIs are going to be integral to any automation framework? How do tool choices fit in an end-to-end operational model where domain boundaries do not exist?
The Bottom Line
Of course, these types of changes likely begin even earlier. Enterprises that do not have key architects with multi-domain, multi-vendor knowledge might be ill-equipped to begin their transformation. For many, multi-cloud will begin with retraining, re-skilling, and ultimately re-tooling their operations teams. The good news is that, for most, the transition is just beginning. There is still time to get this right.