The Software-Defined Data Center (SDDC): A Regulated Enterprise Perspective
The definition of what a data center is has changed in the past couple of years, shifting toward the SDDC. Many software applications running at Internet scale have no notion of a physical data center; they know only the platform APIs and SDKs that they program against, or provision into, and the "data center" as such has become a resource-execution venue, truly a "cloud."
This is because the notion of a warehouse operating system (such as Google's, IO's IO.OS, Amazon's EC2, and a number of private cloud offerings from traditional data-center equipment vendors) has standardized the underlying facilities and resources and exposed services that can then be automated and orchestrated from proprietary or heterogeneous cloud brokers.
Market intelligence suggests that in the next five years, the notion of "owning" physical data centers will be replaced with a set of forward-demand projections for virtual compute, virtual storage, logical security, and virtual networking. This demand will be expressed in terms of capacity and capability. The execution venues for software will themselves be software platforms, whether Google, Amazon, Rackspace, Savvis, Equinix, private cloud à la Dell, IBM, HP, Fujitsu, et al., or more radical solutions such as IO.
In many enterprises, it has not been possible to leverage these platforms: physical separation of security zones, policy requirements, regulatory requirements, and the like have led to resource proliferation and inefficient use of key compute, storage, security, and network resources, trapping unused capacity within security zones. Within the past year or so, technologies have begun to emerge that address this issue by providing policy enforcement logically or virtually through software. This is a key development that will lead to much broader adoption of software-defined data center (SDDC) solutions in regulated industries.
Paying Heed to Policy
Once you view the “data center” as an execution venue for application services to be deployed onto technology resources, several potential operational models emerge. The most important preparation stage for a regulated business is to understand the workload’s policy requirements. These policies can be expressed in terms of jurisdictional, regulatory, performance, capacity, security, process relationships, and so on. Some examples:
- Jurisdictional: Client data for Singaporean domiciled individuals must be housed in Singapore on systems dedicated to ABC Corp.
- Regulatory: Applications that operate on Singaporean client data must be hosted within Singapore.
- Performance: Latency between all equity trading applications cannot exceed 2 milliseconds.
- Capacity: This equity trading application is certified up to 1 million trades per hour.
- Security: This equity trading application must be firewalled from the rest of the back-office environment.
- Process: These three equity applications must all be hosted in the same execution venue inside the Singaporean country boundary.
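Before a broker can match workloads to venues, requirements like these have to be captured in machine-readable form. The sketch below shows one way that might look in Python; every field name, value, and helper here is hypothetical, not drawn from any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class WorkloadPolicy:
    """Illustrative policy record for one workload (all names hypothetical)."""
    jurisdiction: str              # e.g. "SG": data must stay in-country
    dedicated_tenancy: bool        # systems dedicated to this firm
    max_latency_ms: float          # performance ceiling between peer apps
    certified_tph: int             # capacity certification (trades per hour)
    firewalled_from: list = field(default_factory=list)  # security zones
    colocate_with: list = field(default_factory=list)    # process affinity

def venue_satisfies(policy: WorkloadPolicy, venue: dict) -> bool:
    """Return True if a candidate execution venue meets the hard constraints."""
    return (venue["country"] == policy.jurisdiction
            and venue["dedicated"] >= policy.dedicated_tenancy
            and venue["latency_ms"] <= policy.max_latency_ms
            and venue["capacity_tph"] >= policy.certified_tph)

policy = WorkloadPolicy("SG", True, 2.0, 1_000_000,
                        firewalled_from=["back-office"],
                        colocate_with=["eq-app-1", "eq-app-2"])
venue = {"country": "SG", "dedicated": True,
         "latency_ms": 1.5, "capacity_tph": 2_000_000}
print(venue_satisfies(policy, venue))  # True: this venue qualifies
```

A real broker would evaluate far richer policies than this, but even a flat record like the one above is enough to filter out venues that violate jurisdiction or capacity constraints before any provisioning begins.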
These requirements reduce the applicability of public cloud to only a small number of workloads in a regulated industry like finance or healthcare. This is because the public clouds may not be present in the jurisdiction, they may replicate data between jurisdictions, or they may not be able to meet the security requirements or provide the policy enforcement capabilities required for the workload. The requirements also lay out the basis for use cases that a truly software-defined “data center” must support.
In this SDDC, each resource type (storage, compute, security-by-service, network) must be exposed as a set of pluggable services with service-level agreements (SLAs), costs, and management and monitoring capabilities. These will be assembled into blueprints or design patterns which contain security enforcement points and the specific implementation details around how a particular execution venue enforces a policy.
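To make that concrete, a blueprint might be modeled as a bundle of resource services, each carrying its SLA and cost metadata plus any security enforcement points. The structure below is a minimal sketch; the service names, SLA fields, and costs are invented for illustration.

```python
# Hypothetical blueprint assembling pluggable resource services;
# all names, SLA figures, and costs are illustrative only.
blueprint = {
    "name": "sg-equity-trading",
    "services": {
        "compute":  {"sla_uptime": 0.9999, "cost_per_hour": 1.20},
        "storage":  {"sla_iops": 50_000, "encrypted_at_rest": True},
        "network":  {"sla_latency_ms": 2.0, "encrypted_in_motion": True},
        "security": {"enforcement_point": "virtual-firewall"},
    },
    "policy_refs": ["jurisdiction:SG", "security:zone-isolation"],
}

def enforcement_points(bp: dict) -> list:
    """List the security enforcement points a venue must implement
    before this blueprint can be deployed onto it."""
    return [svc["enforcement_point"]
            for svc in bp["services"].values()
            if "enforcement_point" in svc]

print(enforcement_points(blueprint))  # ['virtual-firewall']
```

The point of the structure is that each service is swappable: a different execution venue can plug in its own storage or network implementation, so long as the SLA fields and enforcement points in the blueprint are still satisfied.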
For instance, let's assume the jurisdictional policy above also requires that data at rest and data in motion be encrypted to some minimum standard. There are many different ways to implement this on traditional networks (TLS, or GRE tunnels protected by IPsec, for example) or through software-defined networking (SDN). And since the security services are themselves part of the data-in-motion flow, they too have to implement data-in-motion enforcement that meets the underlying logical requirement (and policy) associated with the flow.
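That last point, that every hop in the flow, including any inline security services, must itself honor the encryption policy, can be illustrated with a toy check. The hop names, attributes, and minimum-version logic here are all hypothetical.

```python
def flow_meets_policy(hops: list, min_tls: str = "TLSv1.2") -> bool:
    """Check that every hop in a data-in-motion flow, including inline
    security services, enforces at least the minimum TLS version."""
    order = ["TLSv1.0", "TLSv1.1", "TLSv1.2", "TLSv1.3"]
    floor = order.index(min_tls)
    return all(hop["tls"] in order and order.index(hop["tls"]) >= floor
               for hop in hops)

flow = [
    {"name": "app-gateway",      "tls": "TLSv1.3"},
    {"name": "inline-dlp-proxy", "tls": "TLSv1.2"},  # security service in the path
    {"name": "storage-frontend", "tls": "TLSv1.2"},
]
print(flow_meets_policy(flow))  # True: every hop meets the floor
```

If the inline proxy terminated TLS and re-originated the connection at a weaker version, the check would fail, which is exactly the failure mode the prose describes: a security service silently becoming the weak link in the flow it is meant to protect.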
It seems likely that cloud brokers, service proxies, et al. will solve this problem across all execution venues in a fairly consistent way, and that, over time, they will also leverage underlying enforcement capabilities built into the execution venue as those capabilities improve and become more broadly accepted by risk, audit, and regulatory bodies. They could then move away from an overlay model for these controls and rely on the core execution venue's capability.
In Case of Disaster
Another extremely interesting use case is the capability to easily move workloads from one execution venue to another through re-provisioning, snapshotting, and so on. This completely revolutionizes the way disaster recovery planning can be done, though actual implementation, and acceptance as a valid means of recovery, could be a long way off.
Ironically, these mechanisms can provide a really important vehicle for recovery to a cloud environment, even for existing production/DR two-site implementations at a vastly different price point. As climate-change volatility changes thinking around options for disaster recovery, it is creating a secondary (options) market for compute venues.
In my recent role as group CTO of UBS, a lot of work was done to build out a software-defined data center, deploy Dev-as-a-Service, and begin the move into automating Test-as-a-Service environments. To make the leap into production and support security zones via policy, the logical/virtual security capabilities and the detailed technical solutions that meet the control environment's policy requirements are essential.
A number of interesting new technologies are emerging in this space, including software-defined storage, SDN, and software-defined security. The trick will be not to "silo" these solutions, but to treat them as building blocks in a front-to-back design that meets policy needs.