There is a changing of the guard in today’s technology organizations: control over new application deployment is largely shifting away from traditional IT. Application developers prefer to go straight to the public cloud, bypassing IT for immediate, direct access to infrastructure resources and developer-friendly platform services. This might lead one to suspect that on-premises development is all but dead – after all, if the bulk of new applications land in a public cloud, where does that leave on-premises IT in the long run?
Even as scrappy developer teams build their new apps on Amazon Web Services, Google Cloud or Azure, many existing on-premises systems are relentlessly holding on. Organizations often have a number of legacy applications, and a substantial amount of data those applications have produced over the years, that remain important to the business and aren’t going away any time soon. And for a variety of reasons, it doesn’t always make sense to move these legacy applications to a public cloud.
On-premises is very much alive.
For example, organizations will often have legacy applications on-premises and new applications in the public cloud. In some cases, these applications need to interact, communicate, and/or share data. Now, two fundamentally distinct architectures need to act as one, or at least work more closely together than they did in the past. Any number of reasons can produce this situation – a big re-org, say, or corporate M&A – and we’ve come to call the resulting quandary “hybrid cloud.” How we solve the challenges associated with it is an ongoing engineering effort across the industry. But is hybrid cloud relegated to running legacy workloads? What about new application development?
As born-in-the-cloud applications mature, some companies are finding that it can actually be cheaper and easier to run them on-premises after all. So “lift and shift” programs are now moving applications and data in the other direction as well – from public cloud to private cloud, at least in part. Going back a few years, who’d have thought… yet here we are.
Scenarios like this are driving hybrid cloud and private cloud investment.
It turns out on-premises isn’t dead at all – yet nobody wants to adopt yesterday’s operating models to build tomorrow’s applications. Self-service, DevOps, Cloud Native… to facilitate all these cloud-inspired trends, modern cloud infrastructure paradigms are needed on-premises, too.
The major public cloud vendors are starting to recognize this as they try to address hybrid cloud infrastructure with their own on-premises architectures. However, there are implications to plopping some of these “cloud in a rack” systems down on-premises. It may look like a quick and easy way to get true cloud capabilities on-premises, but it also creates a “walled garden” that keeps an application confined within one vendor’s overall ecosystem. It also keeps the application architecture focused on one set of proprietary services. If you think about it, technologies related to containers were born to make workloads portable and agnostic. “Cloud Native” does not imply off-premises, or at least it isn’t supposed to. Infrastructure as a Service and modern DevOps should work on-premises just as well as off, and without a vendor lock-in.
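To make that portability argument concrete: a containerized workload described declaratively can be deployed unchanged to a managed Kubernetes service in any public cloud or to a Kubernetes cluster running on-premises. A minimal sketch follows – the `web` name and the image reference are placeholder values for illustration, not from any particular product:

```yaml
# Minimal Kubernetes Deployment manifest. The name "web" and the
# image "registry.example.com/web:1.0" are hypothetical placeholders.
# The same file applies to any conformant Kubernetes cluster,
# whether cloud-hosted or on-premises.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0
        ports:
        - containerPort: 8080
```

Deploying it is the same `kubectl apply -f deployment.yaml` either way; the only thing that changes is which cluster `kubectl` points at. That vendor-agnostic contract is exactly what the “walled garden” architectures give up.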
Today’s enterprise needs forward-looking technologies that enable clouds to be built on-premises in an “open” way, without locking IT departments into any single vendor ecosystem. By focusing on the right options and tools to build a modern cloud – a private cloud with hybrid capabilities, and cloud-native tooling that embraces heterogeneity – IT leaders can future-proof their infrastructure on enterprise-grade, performant, and scalable technologies, gaining greater business agility and operational efficiency.