Analysts are not employed by SDxCentral and the views, thoughts, and opinions expressed in their content belong solely to the author and do not reflect the views of SDxCentral. Note: AvidThink is a separate organization, created by Roy Chua, that is not affiliated with SDxCentral.
This article is underwritten by VMware. The underwriter of this article helps fund its creation, but it has no control over the specific content of the article.
Thanks to the rise of network virtualization, it is now possible to flatten the networking architectures that span all these environments. By combining network virtualization with emerging cloud native platforms such as Kubernetes, IT teams can deploy a common IT framework across all the platforms that make up an extended enterprise.
For example, an instance of Kubernetes integrated with VMware’s NSX virtualization software in the form of Pivotal Container Service (PKS) can be deployed on public clouds such as Amazon Web Services (AWS) as well as on instances of VMware running in a local data center or at the network edge. In addition to providing the network overlay and the Kubernetes compute abstraction layer required to unify the management of those disparate environments, NSX integrated within PKS provides the means to prioritize and secure network traffic by applying policies to zones of resources. That prioritization and security is enabled by microsegmentation created and managed using the virtual network overlay. VMware developed PKS along with its sister company Pivotal Software.
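As an illustration, a zone-based policy of the kind described above can be expressed at the Kubernetes layer as a standard NetworkPolicy object, which the overlay then enforces when NSX serves as the cluster’s container network plugin. The namespace, labels, and port below are hypothetical; this is a minimal sketch, not a PKS-specific configuration.

```yaml
# Hypothetical policy: only pods labeled app=web in the prod namespace
# may reach pods labeled app=db, and only on the database port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-zone-ingress      # hypothetical policy name
  namespace: prod            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: db                # the protected "zone" of resources
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web       # only the web tier may connect
      ports:
        - protocol: TCP
          port: 5432         # hypothetical database port
```

Because the policy selects workloads by label rather than by IP address, the same microsegmentation rule follows the application wherever the overlay extends.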
The latest version of PKS adds load-balancing capabilities along with Harbor, a cloud native implementation of an enterprise-class container registry, optimized for PKS, for distributing container images. The upgrades make it simpler to set up and manage disparate environments. These capabilities complement an existing set of open source BOSH tools developed by Pivotal that automate the deployment of distributed infrastructure.
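For context, BOSH drives that automation from declarative deployment manifests: the operator describes the desired releases, base images, and instance counts, and BOSH converges the infrastructure to match. The heavily abbreviated manifest below is hypothetical; the release, job, and network names are illustrative, not taken from an actual PKS deployment.

```yaml
# Hypothetical, abbreviated BOSH deployment manifest.
name: example-deployment
releases:
  - name: example-release    # illustrative packaged software release
    version: latest
stemcells:
  - alias: default
    os: ubuntu-xenial        # base OS image BOSH manages
    version: latest
instance_groups:
  - name: worker
    azs: [z1]                # availability zone from the cloud config
    instances: 3             # BOSH keeps three VMs running and healthy
    jobs:
      - name: example-job
        release: example-release
    vm_type: default
    stemcell: default
    networks:
      - name: default
update:                      # rolling-update behavior
  canaries: 1
  max_in_flight: 2
  canary_watch_time: 30000
  update_watch_time: 30000
```

The same manifest can be deployed against different IaaS targets, which is what lets the tooling span public clouds, local data centers, and edge sites.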
As it becomes simpler to operationalize cloud native applications based on containers and Kubernetes, the artificial lines that isolate various computing environments will start to dissipate.
“Where an application workload runs will become less relevant over time,” says Wendy Cartee, senior director for cloud native advocacy for VMware.
However, the size of the IT staff is likely to remain constant even as application workloads become more distributed. This means IT organizations will increasingly need to rely on automation to deploy and manage applications, says Cartee.
Many virtual infrastructure providers are tackling this need for automation and the ability to extend a common framework across platforms. Mirantis, for example, enables IT organizations to operationalize Kubernetes within the context of an OpenStack environment.
In the near term, the biggest challenge IT organizations are likely to face is corralling all the different distributions of Kubernetes in their environments. There are over 75 certified distributions of Kubernetes today, and that number is likely to increase. IT organizations may soon find themselves trying to manage different distributions of Kubernetes running on multiple clouds and on various classes of IT infrastructure in a local data center or at the network edge.
A recent survey published by 451 Research found that 19 percent of organizations currently employ containers and another 26 percent plan to move in that direction in the coming year. In practice, once an IT organization deploys roughly 100 containers in a production environment, it quickly becomes apparent that a container orchestration engine such as Kubernetes is critical.
As Kubernetes clusters begin to sprawl across the enterprise, developers begin asking internal IT operations teams to take over managing them. Savvy IT teams will want to get in front of any transition to Kubernetes. After all, developers are not especially particular about which distribution of Kubernetes they employ; for them, the choice of platform generally comes down to whatever has the least amount of friction. The challenge facing IT operations teams is to make sure the distribution of Kubernetes that is simplest and easiest to consume is also the one employed.