Over the last several years, enterprise customers have evolved to leverage the power of public cloud infrastructure by running their applications on well-established cloud service providers such as Amazon Web Services and Google. Though this approach has brought cost advantages in some situations, organizations have rarely ventured into the public cloud with their mission-critical applications, due to security and regulatory concerns. Organizations love the cloud way of developing applications, where high availability and scalability are built into the application rather than the infrastructure, but building cloud infrastructure in a private environment for running critical applications and storing important data has proved quite challenging.
Interestingly enough, all the public cloud infrastructures are based on in-house technologies that are not available for direct private consumption. There have been multiple open-standard initiatives such as OpenStack, CloudStack, and Eucalyptus in recent years, each gaining variable traction in private enterprise environments. This is mainly because it is quite an ordeal to make open-source/open-standard orchestration systems work on an arbitrary set of server, storage, and network hardware. Tracking and managing the issues that are common in open-source software packages, along with handling server and network management, has proven to be a very difficult proposition.
Enterprises have also clearly looked for a hybrid model in which they can use the public and private clouds selectively, based on the performance and business requirements of their applications. Though widely talked about, hybrid operation has eluded most organizations due to the complexity of compatibility and API integration.
Customers should be able to choose a cloud solution based on their application and business needs without being bogged down by technological restrictions and implementation complexities. Wouldn't it be ideal to have a solution that lets users quickly deploy an elastic cloud where they can freely assign applications to either the public or the private cloud and enjoy the benefits of both? Here are the three features you should look for when evaluating a cloud solution for your enterprise.
The first feature is API compatibility between the private cloud and the public cloud. AWS is obviously not going to change its APIs to accommodate private cloud implementations, so the most straightforward way to achieve API compatibility is to include public-cloud-like API support in the private cloud implementation. Take OpenStack as an example: when it was first released there was no native API, only the EC2 API, and Rackspace API support was added later. Cloudscaling founder and acting CEO Randy Bias, one of the legends of the cloud industry, is a big advocate of keeping OpenStack APIs consistent with public cloud APIs, as can be seen in this blog. The scale and flexibility of public clouds give customers a great cost benefit: they can very easily offload excess non-sensitive workloads to AWS, Google Compute Engine, and the like.
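To make the compatibility argument concrete, here is a minimal sketch of what EC2-style API compatibility buys you: the client builds the exact same EC2 Query API request whether it targets AWS or an EC2-compatible private cloud, and only the endpoint changes. The private endpoint below is hypothetical, and real requests would also carry a Signature Version 4 authorization header, which is omitted to keep the sketch short.

```python
from urllib.parse import urlencode

def ec2_request_url(endpoint, action, extra_params=None):
    """Build an (unsigned) EC2 Query API request URL.

    Signing is omitted here; the point is that the query itself is
    identical regardless of which cloud serves it.
    """
    params = {"Action": action, "Version": "2016-11-15"}
    params.update(extra_params or {})
    return endpoint + "/?" + urlencode(sorted(params.items()))

# The same API call targets either cloud; only the endpoint differs.
# The private-cloud endpoint is a made-up example.
public_url = ec2_request_url("https://ec2.us-east-1.amazonaws.com",
                             "DescribeInstances")
private_url = ec2_request_url("https://cloud.example.internal:8773/services/Cloud",
                              "DescribeInstances")
```

Because the query portion is byte-for-byte identical, existing tooling written against the public cloud can be pointed at the private cloud through configuration alone.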
The second feature is a strong network orchestration and automation component that can build an open and dynamic network for the cloud. Server virtualization has revolutionized the compute environments where applications run, providing much higher resource utilization, a degree of workload mobility, and orchestration and automation. But the underlying network has lagged behind. Most data centers still use VLANs to provision the network for multi-tenancy: moving workloads around within a large data center or across multiple data centers is cumbersome, VLAN provisioning is still largely manual, and VLAN convergence introduces latency. The networking industry is looking to SDN to virtualize, orchestrate, and automate the network so that the orchestrator can holistically provision all resources, including the network, for application needs. The SDN controller that truly takes off and catches fire must be designed with a very open architecture: an open northbound API so the cloud orchestrator can program the network, the ability to communicate with physical and virtual gateways and with other controllers, and the ability to run on any physical network with IP connectivity. With network automation, spinning up multi-tier applications and getting the tiers properly connected, attaching virtual machines and instances to the public Internet securely, and a wide range of other tasks can all be done at your fingertips.
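A northbound API like the one described above typically accepts a declarative request from the orchestrator. The sketch below builds such a request for a hypothetical SDN controller (the endpoint, field names, and policy model are all assumptions, not any real controller's API): one isolated virtual network per application tier, with traffic allowed only between adjacent tiers.

```python
import json

def build_network_request(tenant, tiers):
    """Build a JSON payload for a hypothetical SDN-controller northbound
    API: one isolated virtual network per tier, plus allow-policies only
    between adjacent tiers (e.g. web -> app -> db)."""
    networks = [{"name": f"{tenant}-{tier}", "encapsulation": "vxlan"}
                for tier in tiers]
    policies = [{"from": f"{tenant}-{a}", "to": f"{tenant}-{b}",
                 "action": "allow"}
                for a, b in zip(tiers, tiers[1:])]
    return json.dumps({"tenant": tenant,
                       "networks": networks,
                       "policies": policies})

# A three-tier application for a tenant named "acme" (made-up names).
payload = build_network_request("acme", ["web", "app", "db"])
```

The orchestrator would POST this payload to the controller, which in turn programs the physical and virtual switches; the application team never touches a VLAN.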
Last but not least, the solution needs an easy, flexible, and functional way to manage distributed block and object storage. Cloud is bringing a revolution to storage as well. The new elastic scale-out architecture means smaller failure domains and better risk mitigation. In addition, the responsibility for managing complexity has shifted to applications, to accommodate geo-distribution and auto-scaling. Because server and storage hardware occasionally require maintenance or repair, it is important to store the same information on multiple machines for redundancy; without redundancy, a cloud storage system cannot assure clients that their data will be accessible at any given time. Cloud storage needs to handle replication across storage devices, or even across data centers in different geographical regions, and restore operations from backup snapshots with very low latency.
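The redundancy requirement above usually comes down to deterministic replica placement: every client must compute the same set of nodes for a given object without consulting a central directory. Here is a toy sketch of that idea using rendezvous (highest-random-weight) hashing; the node names are invented, and a production system would additionally weight nodes and spread replicas across failure domains.

```python
import hashlib

def place_replicas(object_key, nodes, replica_count=3):
    """Deterministically choose `replica_count` distinct nodes for an
    object by ranking every node on a hash of (key, node).  A toy
    version of rendezvous hashing, not a production placement algorithm."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256((object_key + "|" + n).encode()).hexdigest(),
        reverse=True)
    return ranked[:replica_count]

# Hypothetical node names spanning three data centers.
nodes = ["dc1-node1", "dc1-node2", "dc2-node1", "dc2-node2", "dc3-node1"]
placement = place_replicas("user42/photo.jpg", nodes)
```

Because the ranking depends only on the key and the node list, any client reaches the same three replicas, and losing one node reshuffles placement for only the objects that lived on it.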