The intriguing concept of composable infrastructure has been discussed by several individuals in the IT industry over the past few years. Composability refers to the idea of building infrastructures where all data center resources reside in pools, and applications can pull what they need to run as quickly and efficiently as possible and then return what they don’t. Essentially, composability means that data centers should be defined by the needs of applications. Customers are demanding approaches that work across many vendors and many technologies, which, in turn, require solutions that are simple, open, standards-based, and scalable.
While full composability does not exist today, the industry is heading in that direction, and the light at the end of the tunnel is quite near. Presently, a number of companies are claiming true composability and its nirvana end state without fully explaining the underlying infrastructure and capabilities. Several organizations I’ve spoken with have told me this is creating a good deal of confusion.
Don’t be mistaken: we have arrived at a measure of “composability”; it’s just not fully realized. For example:
- It’s possible to group servers logically and manage them as a single resource. This is what converged infrastructure delivers.
- Similarly, storage can be grouped into pools via software-defined storage (SDS) solutions, traditionally in a SAN, or through SAS zoning.
- There are startups tackling PCI Express (PCIe) disaggregation and NVMe disaggregation via NVMe over Fabrics (NVMe-oF).
- Finally, the network can be managed and composed logically in a solution like VMware NSX or through other SDN solutions.
Despite the progress made, “composability” cannot be fully realized until systems are completely disaggregated and stitched back together to form pools of resources, where applications and workloads can be orchestrated. Because key memory-centric resources are still trapped inside the server, anyone claiming to offer composability today is talking about a partial solution.
Today’s approach requires a number of interfaces and technologies: Ethernet or Fibre Channel, PCIe, and SAS. That’s quite a bit to orchestrate, plus there’s the cost of these disparate technologies, and none of them tackle the memory-centric world that is emerging. True composability requires that the underlying data center infrastructure can be disaggregated into its fundamental parts using a common technology.
Getting to a fluid, dynamic infrastructure is a journey. Working with customers and across the technology industry is accelerating this shift, reinventing both hardware and software to align with the vision of composable infrastructure.
Currently, storage and networking can be composed, but this is through input/output (I/O) composability, not memory composability, and it’s important to understand the difference. The I/O-composable world lives entirely behind a protocol stack, whether TCP/IP or a file/block protocol, and it is enabled by the I/O subsystem. True composability must also include the memory domain.
However, a new class of “sub-microsecond devices” is emerging: storage-class memories, GPUs, and FPGAs that operate in the hundreds-of-nanoseconds range and cannot tolerate a protocol stack. If you take storage-class memory and put it at the end of a 10-Gb/s wire, you’ve killed much of its intrinsic value. That’s why we need to move toward a memory-centric, kinetic infrastructure, where we reach the next level of full composability through the memory domain rather than hiding these devices behind a protocol stack.
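To make the protocol-stack penalty concrete, here is a rough back-of-the-envelope comparison. The latency figures are illustrative assumptions, not measurements from any particular product:

```python
# Back-of-the-envelope comparison: storage-class memory accessed locally
# versus behind a 10-Gb/s link and a protocol stack.
# All latency figures below are illustrative assumptions, not measurements.

PAYLOAD_BYTES = 4096                 # one 4 KB read
LINK_BITS_PER_SEC = 10e9             # 10 Gb/s link

scm_media_ns = 350                   # assumed SCM media access time (100s of ns)
serialization_ns = PAYLOAD_BYTES * 8 / LINK_BITS_PER_SEC * 1e9
stack_overhead_ns = 20_000           # assumed TCP/IP + block-protocol round trip (~20 µs)

local_ns = scm_media_ns
remote_ns = scm_media_ns + serialization_ns + stack_overhead_ns

print(f"local SCM access:      {local_ns:,.0f} ns")
print(f"behind protocol stack: {remote_ns:,.0f} ns "
      f"(~{remote_ns / local_ns:.0f}x slower)")
```

Even with generous assumptions, the stack and the wire dominate by well over an order of magnitude, which is the point: the device itself is no longer the bottleneck, the plumbing is.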
Kinetic infrastructure is the best way I’ve found to define true composability. It delivers flexibility of configuration across storage devices, memory-centric devices, and acceleration devices. Additionally, it makes it possible to assign the right resources to the right workload and to change them dynamically as business needs change.
So, are we there yet? Not quite. But we’re dedicated to moving this toward reality through collaboration across the industry via open standards. We need to disaggregate technologies using a memory-centric architecture and a memory-semantic fabric. The industry is collaborating for this exact reason through Gen-Z, a consortium building an open systems interconnect, and Gen-Z has made great strides in just the past year.
In addition to hardware, the software evolution is also in progress. Full composability requires that third parties be involved through open APIs, which is why the industry, through the Distributed Management Task Force (DMTF), just added composability support to Redfish, a standard management API for software-defined data centers (SDDC). Storage Networking Industry Association (SNIA) Swordfish has emerged as an extension of the DMTF Redfish specification, so the same easy-to-use RESTful interface is used to seamlessly manage storage equipment and storage services.
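To give a feel for what composability support in Redfish looks like in practice, here is a minimal sketch that walks the Redfish Composition Service over its RESTful interface and lists the composable building blocks it advertises. The host, credentials, and exact property names are assumptions for illustration; any real platform’s Redfish implementation should be consulted for specifics:

```python
# Minimal sketch: listing composable building blocks via the Redfish
# Composition Service. Host, credentials, and exact resource layout are
# illustrative assumptions, not a specific vendor's implementation.
import requests

BASE = "https://bmc.example.com"     # hypothetical Redfish service endpoint
AUTH = ("admin", "password")         # placeholder credentials

def get(path):
    """GET a Redfish resource and return its JSON body."""
    r = requests.get(BASE + path, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

# The Composition Service exposes pools of resource blocks (compute, storage,
# network) and the zones within which they can be stitched into systems.
svc = get("/redfish/v1/CompositionService")
blocks = get(svc["ResourceBlocks"]["@odata.id"])

for member in blocks.get("Members", []):
    block = get(member["@odata.id"])
    state = block.get("CompositionStatus", {}).get("CompositionState")
    print(block.get("Name"), block.get("ResourceBlockType"), state)
```

The same pattern extends to Swordfish-managed storage, since Swordfish resources hang off the same RESTful service root.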
What’s a Possible Scenario With Full Composability?
Here’s something Dell EMC has been able to do in our own lab. Imagine a Kubernetes cluster with several of the composable technologies described above.
- We could recognize a Kubernetes job that has been placed on the scheduler’s pending work queue due to a lack of hardware resources.
- We could determine why the job is pending, i.e., the fact that it lacks resources, and identify which resources the nodes are missing for the job to run (see the sketch after this list).
- We could select a Kubernetes node and reconfigure it to meet the runtime requirements of the pending job through dynamic resource assignment.
- We could then trigger Kubernetes to run the job and, when it completes, free the resources back to the pool.
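As a rough illustration of the detection step in this flow, the sketch below uses the Kubernetes Python client and standard pod status fields to find jobs stuck in Pending for lack of resources and report what they are asking for. The recomposition call itself is platform-specific and is only stubbed out here:

```python
# Rough sketch of the detection step: find pods stuck in Pending because the
# scheduler can't place them, and summarize the resources they request.
# The actual recomposition call is platform-specific and only stubbed out.
from kubernetes import client, config

config.load_kube_config()            # or load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

pending = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending")
for pod in pending.items:
    # The scheduler records why it could not place the pod in a PodScheduled condition.
    conditions = pod.status.conditions or []
    unschedulable = any(
        c.type == "PodScheduled" and c.status == "False" and c.reason == "Unschedulable"
        for c in conditions
    )
    if not unschedulable:
        continue

    # Sum up what the pod is asking for, so we know what to compose onto a node.
    requested = {}
    for container in pod.spec.containers:
        for resource, qty in (container.resources.requests or {}).items():
            requested.setdefault(resource, []).append(qty)

    print(f"{pod.metadata.namespace}/{pod.metadata.name} is unschedulable; requests: {requested}")
    # compose_node(requested)  # hypothetical: attach pooled resources, then let the scheduler retry
```

Once the node has been recomposed, no special handling is needed on the Kubernetes side: the scheduler simply retries the pending pod and finds a node that now fits.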
This would enable certain classes of jobs to run in a microservices environment where they may not have been practical previously. This work helps prove the underlying theory and value of full composability.
The industry is well on its way to making an enormous shift, arriving at a true kinetic existence for IT. As we continue to add intelligence to IT — to make it simpler and more cost effective for all — the end result will offer one giant step for IT optimization and utilization.