IT agility and cost are the two main issues driving the adoption of public clouds. In the past, the decision was simple: use an application programming interface (API) to invoke compute, storage, and networking resources at a nominal cost. Today, however, that debate is far more nuanced.
A big reason for that shift has been the rise of converged and hyperconverged systems that can either be pre-integrated or crafted by combining software-defined compute, storage, and networking within the confines of a software-defined data center. For example, VMware combines its core vSphere virtual machine software with vSAN storage and NSX network virtualization software to create a vCloud that can be deployed on-premises or in the cloud. Alternatively, Dell EMC, a sister company of VMware, provides pre-integrated systems that can be configured with software from VMware, Microsoft, or Nutanix.
As these functions become integrated, the total cost of ownership (TCO) for the data center environment not only declines, but the level of agility an IT organization can exploit increases by several orders of magnitude. The decision about which path to pursue really comes down to whether IT organizations want to acquire hardware and software together or acquire server infrastructure separately from the software that gets deployed on it.
Regardless of the approach, it is clear that local data centers are about to become even more efficient in the wake of the delivery of the Intel Xeon Scalable processor series. Optimized for NVM Express (NVMe)-based systems, these processors are now capable of keeping pace with I/O requests from flash storage. When combined with 25G Ethernet to connect the various nodes within the system, it quickly becomes apparent there's now plenty of headroom to run software-defined compute, storage, and networking on the same platform. According to Brian Payne, senior product planning manager for the Dell PowerEdge Server Portfolio, systems based on Intel Xeon Scalable processors are really the first platforms built from the ground up for a software-defined world.
As part of the shift to NVMe-based systems, vendors across the board are increasing the level of automation they provide to make these systems easier to manage than their predecessors. The level of compute, storage, and networking density that can be achieved per system is moving beyond the capabilities of individual IT teams, causing organizations to rely more on automation. According to Joan Starke, senior manager for hybrid cloud and machine learning at Cisco, IT organizations cannot simply keep throwing manual labor at the problem.
The good news is that the more companies embrace IT automation, the lower their costs. As TCO drops, it becomes a lot more cost effective to run application workloads on-premises than to deploy them in a cloud. After all, monthly charges on a public cloud over multiple years will add up.
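That cumulative-cost argument can be made concrete with a little arithmetic. The sketch below compares recurring public-cloud fees against an on-premises deployment (upfront capital expense plus annual operations) and finds the break-even year. All figures are illustrative assumptions for the sake of the example, not vendor pricing.

```python
def cumulative_cloud_cost(monthly_fee: float, years: int) -> float:
    """Total spent on a public-cloud workload after `years` of monthly fees."""
    return monthly_fee * 12 * years


def cumulative_onprem_cost(capex: float, annual_opex: float, years: int) -> float:
    """Upfront hardware/software cost plus ongoing operations for `years`."""
    return capex + annual_opex * years


def breakeven_year(monthly_fee: float, capex: float,
                   annual_opex: float, horizon: int = 10):
    """First year at which on-premises becomes cheaper, or None within horizon."""
    for year in range(1, horizon + 1):
        onprem = cumulative_onprem_cost(capex, annual_opex, year)
        cloud = cumulative_cloud_cost(monthly_fee, year)
        if onprem < cloud:
            return year
    return None


if __name__ == "__main__":
    # Hypothetical numbers: a $4,000/month cloud bill vs. $60,000 of
    # on-prem capex and $20,000/year of operations.
    print(breakeven_year(monthly_fee=4000, capex=60000, annual_opex=20000))
```

With these assumed figures the on-premises option pulls ahead in year three; the point of the exercise is that automation lowers the annual opex term, which moves break-even earlier.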
That doesn’t mean IT organizations are going to abandon public clouds. But there will be a better understanding of which types of workloads run best in a public cloud vs. on-premises. A recent survey of over 200 IT decision makers in the U.S. conducted by Peak 10 found that 61 percent view hybrid cloud computing as an end goal rather than merely a means of moving every application onto a public cloud.