CI and HCI systems are built on a foundation of hardware virtualization and software-defined infrastructure, a combination that has changed how developers design applications and how operations teams deploy them. The days of massive, monolithic applications running on a single mainframe are gone, replaced by highly distributed, inherently resilient systems in which an application might span dozens of machines. However, the fundamental truth that programs equal algorithms plus data structures hasn’t changed. Applications still need fast access to data, even when they’re spread across distributed systems, which demands tight coupling among compute, networking and storage subsystems to deliver adequate application performance. Regardless of advances in network technology, a SAN generally can’t match the performance of locally attached disks connected to a processor complex over PCIe.
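The local-versus-SAN latency gap can be sketched with a back-of-envelope model. All component latencies below are illustrative assumptions chosen for the sketch, not measured values; the point is that a SAN path adds host-stack, fabric and array-controller overhead on top of the same media latency:

```python
# Back-of-envelope I/O latency model, in microseconds.
# Every figure here is an assumed, illustrative value.

LOCAL_PATH_US = {
    "pcie_round_trip": 5,    # assumed PCIe transfer overhead
    "nvme_flash_read": 90,   # assumed flash media read latency
}

SAN_PATH_US = {
    "host_hba_stack": 20,     # assumed host-side driver/HBA overhead
    "fabric_round_trip": 50,  # assumed switch hops across the storage network
    "array_controller": 100,  # assumed queuing/processing in the array
    "flash_read": 90,         # same media latency as the local case
}

local_latency = sum(LOCAL_PATH_US.values())
san_latency = sum(SAN_PATH_US.values())
print(f"local: {local_latency} us, SAN: {san_latency} us")
```

Under these assumptions the SAN path is several times slower per I/O than the direct PCIe path, even though the flash media itself is identical; only the assumed overheads differ.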
The move to distributed systems is evidence of the inherent difficulty of scaling the performance of monolithic designs. Processor designers realized this decades ago when they hit a wall trying to push single-core designs ever faster. Instead, CPUs scaled out, adding cores and improving instruction-execution efficiency while holding clock rates steady. Taking a page from today’s processors with 20 or more cores, infrastructure designers have learned that it’s much easier to scale capacity and performance with connected yet distributed building blocks than with large, monolithic systems.
Storage is the area where the performance advantages of many distributed nodes have become particularly compelling. Monolithic storage arrays shared among many servers are inherently bottlenecked at the disk controllers and storage network interfaces that collectively throttle their I/O throughput. For example, one set of tests found that a transactional database application based on TPC-C would saturate the CPU capacity of even a 24-core host before hitting the inherent I/O limits of just two SSDs. In contrast, a converged system can scale IOPS almost linearly as nodes are added to a database cluster. An important caveat is that the tests were performed by a vendor of HCI systems trying to make a point; however, its methodology was sound, and numerous independent tests have long shown similar results. Indeed, even the largest purveyor of conventional SAN storage systems in the world touts the IOPS performance and scalability of its scale-out, converged products.
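The scaling contrast can be illustrated with a simple model. The per-node and controller-ceiling figures below are hypothetical assumptions, not benchmark data; they merely show why a shared array plateaus at its controller limit while an HCI cluster, which adds locally attached flash with every node, grows roughly linearly:

```python
# Illustrative aggregate-IOPS model; all numbers are assumed, not measured.

ARRAY_CONTROLLER_LIMIT = 500_000   # assumed ceiling of the array's controllers/network
IOPS_PER_NODE = 150_000            # assumed capability of one node's local SSDs

def shared_array_iops(nodes: int) -> int:
    """A monolithic array serves every node but is throttled at its controllers."""
    return min(nodes * IOPS_PER_NODE, ARRAY_CONTROLLER_LIMIT)

def hci_cluster_iops(nodes: int) -> int:
    """Each HCI node contributes its own local flash, so aggregate
    IOPS grows roughly linearly with node count."""
    return nodes * IOPS_PER_NODE

for n in (2, 4, 8, 16):
    print(f"{n:2d} nodes: array={shared_array_iops(n):>9,}  hci={hci_cluster_iops(n):>9,}")
```

Under these assumptions the shared array flatlines once demand exceeds its controller ceiling, while the cluster keeps scaling — the same qualitative shape the vendor and independent tests reported.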
As developers and DevOps teams embrace cloud-native distributed designs, converged systems are emerging as the infrastructure for next-generation applications. Application architects are becoming fluent in designing around PaaS platforms and an exploding variety of IaaS services. At the same time, these architects are also enamored of the convenience and speed with which public cloud services can spin up resources. However, shared cloud services aren’t a good fit for some organizations, whether for reasons of cultural mismatch, regulatory controls, sizeable legacy infrastructure or proprietary applications and data that are difficult to move. These organizations would much prefer to have the benefits of cloud agility, resource flexibility and API-enabled programmability on a private platform.
Converged systems, particularly modular, scale-out HCI, are evolving into the foundation for private clouds. From their roots in conventional VM workloads, HCI software stacks such as Microsoft Azure Stack, Nutanix Enterprise Cloud Platform, Pivotal Cloud Foundry and VMware vCloud Suite are becoming the building blocks of internally operated IaaS and PaaS. However, few organizations want the private-versus-public cloud decision to be all or nothing, which is why these same software stacks integrate with the major public cloud services (or, in the case of Azure Stack, come from the same company as one) to enable hybrid environments in which workloads can be migrated easily between private and public infrastructure.