The cloud provides companies with on-demand storage, cost savings and flexibility. When the ability to rent operating systems and associated hardware was first introduced, it sent shock waves through the vendor community, which feared the buying power of the new cloud giants would erode its margins. Vendors reacted by offering more generous leasing options for their equipment. Today, public cloud providers excel at delivering standardized virtualized instances, but that standardization isn't always ideal for consumers who need custom options.
In the past, a small, young company with little credit had to purchase its gear with cash, which drove up setup costs. New leasing options now lower those barriers to hosting applications oneself as part of a private cloud.
The question that now faces companies migrating to the cloud is, “Which option is the best for my business – the public or private cloud?”
Public cloud providers are certainly cheaper initially, but that advantage erodes quickly: public cloud costs grow linearly with each instance added, while a private cloud adds capacity in step-function increments of leased hardware. The small, medium, large and extra-large T-shirt-sizing public cloud models are not always suitable and can lead to waste. This is especially noticeable when resource needs don't match the pre-defined relative proportions of bandwidth, RAM, compute and storage.
It may come as a surprise that the public cloud can lose its advantage after the deployment of as few as 130 vCores. A recent whitepaper shows that, compared with Amazon EC2 m3.2xlarge instances*, the 36-month cost of hosting 320 of your own vCores in an equivalent OpenStack private cloud** in traditional colocation is actually 23 percent lower. In purpose-built high-density colocation, it is 44 percent lower. That equates to more than $380,000 in lifetime savings with a private cloud.
While there are many other factors to consider in determining the right mix of public and private cloud, it is worthwhile to ask from time to time, “How would things change if some of my IT costs suddenly became at least 20 percent cheaper?”
It is important for teams to understand where the public/private cloud break-even point lies for their particular circumstances so they can plan to adapt their operations accordingly. Typically, the result is a hybrid cloud solution in which steady-state needs are hosted in-house with OpenStack and "bursty" loads are placed in the public cloud.
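The break-even analysis described above can be sketched as a simple model: public cloud cost grows linearly per vCore-month, while private cloud cost is a fixed setup charge plus a step function of whole servers. All rates below are illustrative assumptions for the sketch, not figures from the whitepaper; plug in your own quotes to find your break-even point.

```python
import math

# Illustrative assumptions -- replace with your own pricing.
PUBLIC_RATE = 20.0       # $ per vCore per month (assumed)
SERVER_VCORES = 32       # vCores per private server (assumed)
SERVER_COST = 13_400.0   # $ per server over the term: capex + colo (assumed)
FIXED_SETUP = 50_000.0   # $ one-time cage/network setup (assumed)
TERM_MONTHS = 36

def public_cost(vcores: int) -> float:
    """Public cloud: cost grows linearly with vCores rented."""
    return vcores * PUBLIC_RATE * TERM_MONTHS

def private_cost(vcores: int) -> float:
    """Private cloud: fixed setup plus whole-server steps."""
    servers = math.ceil(vcores / SERVER_VCORES)
    return FIXED_SETUP + servers * SERVER_COST

def break_even_vcores(limit: int = 1024) -> int:
    """Smallest deployment (in server-sized steps) where private wins."""
    for v in range(SERVER_VCORES, limit, SERVER_VCORES):
        if private_cost(v) < public_cost(v):
            return v
    return limit
```

With these assumed rates the crossover lands near 200 vCores; with the whitepaper's actual pricing it landed near 130, which is the point of the exercise: the crossover is lower than most teams expect.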
So, why does high-density colocation have such a positive cost impact on hosting private clouds? Answering that requires re-evaluating the assumption that the local data center is stuck in the 1990s. The days of rows of server cabinets that support less and less modern equipment are ending. Legacy designs deliver about five kilowatts (kW) of power per cabinet, which today often supports only a quarter-full cabinet of modern equipment. Data centers with leading-edge designs can support 20 kW and more, wall-to-wall. This allows companies to consolidate the gear of four cabinets into one, reducing the cost of data-center space, cabling and network gear. Modern designs also let companies consolidate one cabinet at a time, rather than using hot-aisle containment that requires multiple cabinets, so the incremental cost is lower.
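The four-to-one consolidation claim follows directly from the power figures in the text (5 kW legacy cabinets versus 20 kW high-density ones). A minimal sketch of that arithmetic, with the 40 kW example load being an assumption for illustration:

```python
import math

# Power densities from the text: legacy vs. leading-edge cabinets.
LEGACY_KW_PER_CABINET = 5.0
MODERN_KW_PER_CABINET = 20.0

def cabinets_needed(total_load_kw: float, kw_per_cabinet: float) -> int:
    """Cabinets required to house a given total power load."""
    return math.ceil(total_load_kw / kw_per_cabinet)

# A hypothetical 40 kW deployment needs 8 legacy cabinets but only
# 2 high-density cabinets -- the 4:1 ratio that shrinks floor space,
# cabling runs and top-of-rack switch count.
```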
High-density colocation also changes the cost equation of the data center. It allows software engineers to create the cloud strategy they want, without settling for only what is publicly available. The non-standard use cases of high-performance compute clouds for bioinformatics, geological analysis, ad serving, malware correlation, CDNs, externally facing SaaS, video encoding, mobile app analysis, and other applications don’t always fit the public cloud offerings.
In summary, well-planned OpenStack deployments in outsourced or in-house data centers can provide great relief without the public cloud trade-off of lower fixed costs in exchange for higher variable costs and lower average performance per dollar. The tools are available to build a financial model comparing the two strategies. The main challenge for IT teams is to determine when, not if, OpenStack becomes the preferred solution.
* The EC2 m3.2xlarge instance was selected because it is the largest general-purpose instance on offer. Its pricing is proportional to that of smaller general-purpose instances, so the choice of size did not affect the financial analysis.
** The study used systems with equivalent vCores, RAM and disk of the m3.2xlarge instance.