The cloud has become a crucial element of almost every enterprise. Moving data-rich workloads to infrastructure-as-a-service (IaaS) platforms such as Amazon Web Services (AWS) is a priority for IT organizations globally. One of the biggest challenges IT teams face today is therefore building cloud migration into the overall IT business strategy. Not only do IT teams need to choose the right tools; they must also make the right plans and decisions to succeed.
Gartner research shows, “Any cloud-migration decision is, in essence, an application or infrastructure modernization decision and needs to be approached in the broader context of related application portfolio management and infrastructure portfolio management programs.”
To ensure a successful transition between IT environments, teams need to be aware of the factors that can derail a migration project.
Operational excellence is a key success factor for enterprise IT. However, building operations that support growth and business innovation while maintaining day-to-day operations can be challenging. The public cloud clearly offers compelling agility to support rapid change and growth, but successfully moving existing applications to the cloud requires good process, along with good technology that can bridge the gap between heterogeneous cloud environments.
As enterprise IT departments attempt to match data center supply with growing and often fluctuating user demand, balancing under-utilized resources against over-commitment becomes essential. The cost-effective, pay-as-you-go nature of the public cloud can complement existing private cloud computing resources. Implementing a migration solution as part of a hybrid cloud strategy requires investment, expertise, and commitment to bridge the complexity, particularly for data-rich workloads.
Here are three risks that every enterprise should be aware of during the cloud migration planning phase, along with tips, grounded in industry best practices, for working toward a good solution.
Managing IT System Complexity
The more intricate the architecture, the higher the chance of failure when migrating between environments. Data-rich applications are seldom stand-alone; they are often woven into and tangled with other elements, and may have multiple dependencies on on-premises systems. Target cloud environments, meanwhile, may differ from the on-premises system in consoles, management tools, processes for managing data resiliency (e.g., backup, replication, HA, DR), network drivers, and the actual services offered. Amid this complexity, the key challenge is to identify what will work in production and eliminate mismatches when lifting and shifting apps to the cloud.
Tip 1: Consider solutions that let you ‘test before you migrate.’ They can quickly identify, and even remedy, the differences between environments and reduce the overall complexity of migration. Then, work with cloud providers whose services map to the way you already work on-premises. Being agile and nominally cost-effective isn’t enough to manage the complexity; the net result is more engineering and operations time spent than planned, with the added frustration of trying to fit “square pegs” into “round holes.”
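As a minimal illustration of a ‘test before you migrate’ check, the sketch below (Python, standard library only) compares health-check responses from the on-premises deployment and a trial cloud copy of the same application. The endpoint URLs are hypothetical placeholders, not a real service.

```python
import urllib.request

# Hypothetical endpoints: swap in your own application's health-check URLs.
ENDPOINTS = {
    "on-prem": "https://app.internal.example.com/health",
    "cloud":   "https://app-trial.cloud.example.com/health",
}

def check(name, url, timeout=5):
    """Fetch a health endpoint and report status, so environment gaps surface early."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            print(f"{name}: HTTP {resp.status} -> {body[:80]}")
            return resp.status == 200
    except Exception as exc:  # driver, network, and DNS mismatches show up here
        print(f"{name}: FAILED ({exc})")
        return False

if __name__ == "__main__":
    results = {name: check(name, url) for name, url in ENDPOINTS.items()}
    if all(results.values()):
        print("Both environments respond; proceed to deeper parity tests.")
    else:
        print("Environments diverge; investigate before migrating.")
```

A real parity test would go further, comparing response bodies, latency, and behavior under load, but even a simple check like this surfaces environment gaps before production cutover.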
Overcoming Data Gravity and Security Concerns
Dave McCrory, creator of the concept of “data gravity,” wrote: “Data gravity describes the effect that as data accumulates, there is a greater likelihood that additional services and applications will be attracted to this data, essentially having the same effect gravity has on objects around a planet. Although services and applications have their own gravity, data is the most massive and dense, meaning it has the most gravity. If data becomes large enough it can become virtually impossible to move.”
Data gravity in cloud migration describes the situation where it is difficult to test whether an application will work effectively in the cloud, because most replication-based migration tools require data to be moved first.
Another fundamental challenge of moving large volumes of data from point A to point B involves transactional production servers that continuously generate or update data, especially databases that require consistent views. Once a snapshot of the data has been migrated, the system must track and ultimately synchronize the changes the production application keeps making. Managing and controlling data streams across a heterogeneous environment also poses real challenges; security, for one, becomes a concern. Many traditional enterprise IT organizations remain wary of storing production data in the public cloud because they lack comparable control over multiple disparate data repositories across different IT environments.
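To make that catch-up step concrete, here is a toy, in-memory sketch of the snapshot-plus-journal pattern that replication-based tools follow. The Source class and migrate function are illustrative stand-ins, not a real migration API.

```python
# Toy, in-memory model of the snapshot-plus-journal pattern used by
# replication-based migration; all names here are illustrative.

class Source:
    def __init__(self):
        self.data = {}      # current state of the "database"
        self.journal = []   # ordered log of (key, value) writes

    def write(self, key, value):
        self.data[key] = value
        self.journal.append((key, value))

def migrate(source, small_backlog=10):
    # 1. Bulk-copy a point-in-time snapshot while production keeps writing.
    seq = len(source.journal)
    target = dict(source.data)
    # 2. Iteratively replay journaled writes until the backlog is small.
    while len(source.journal) - seq > small_backlog:
        new_seq = len(source.journal)
        for key, value in source.journal[seq:new_seq]:
            target[key] = value
        seq = new_seq
    # 3. In a real system, writes are briefly quiesced here so the final
    #    delta is applied against a consistent view before cutover.
    for key, value in source.journal[seq:]:
        target[key] = value
    return target
```

The pattern works, but every line of it is synchronization machinery that a streaming, live-migration approach avoids entirely.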
Tip 2: The root cause of this challenge is the “replication”-based approach: resources must be replicated and synchronized, which can be a complex and daunting task. The remedy is to avoid replication and move only the actual resource (i.e., the VM or instance). Consider “live migration” approaches and tools that do this by streaming the whole instance rather than creating a replica of the resource. Live migration eliminates complex system synchronization and, with it, consistency issues. Once the VM is up in the cloud, it should assume ownership of data access, so only one instance ever accesses the data. In addition, to protect your data streams, look for solutions offering direct, secured connectivity (e.g., AWS Direct Connect) in and out of the cloud, and implement them in a highly available configuration.
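As one small example of the “highly available configuration” point, here is a sketch using boto3’s Direct Connect API to confirm that more than one connection, in more than one location, is actually up. It assumes AWS credentials are already configured, and it is a monitoring aid, not a complete HA design.

```python
import boto3
from collections import Counter

# Sketch: verify Direct Connect redundancy before trusting it with
# production data streams. Assumes AWS credentials are already configured.
dx = boto3.client("directconnect")

connections = dx.describe_connections()["connections"]
up = [c for c in connections if c["connectionState"] == "available"]
locations = Counter(c["location"] for c in up)

print(f"{len(up)} available Direct Connect connection(s) "
      f"across {len(locations)} location(s): {dict(locations)}")

# A single link (or several links in one facility) is a single point of
# failure; aim for at least two connections in distinct locations.
if len(up) < 2 or len(locations) < 2:
    print("WARNING: connectivity is not highly available.")
```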
Preventing Vendor Lock-in
Major public cloud providers such as AWS, Microsoft Azure, and Google Cloud Platform share a key priority: keeping an enterprise on board as a repeat customer through dependency on their specific APIs and services, not just on merit. Combined with the data gravity phenomenon, the journey to the public cloud can therefore become a one-way trip. For effective data and workload migration that doesn’t slow the business down or introduce prohibitive costs, IT professionals require workload mobility.
Tip 3: Use containers (e.g., Docker) to gain portability and interoperability for stateless applications in a multi-cloud strategy. For data-rich enterprise applications, evaluate migration solutions in terms of speed and simplicity.
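To illustrate the portability argument, here is a brief sketch using the Docker SDK for Python: the same image runs unchanged against any Docker host, whichever cloud it lives in. The image tag and the two daemon endpoints are placeholders for your own registry and hosts.

```python
import docker

# Sketch of container portability: identical bits run on any Docker host,
# on-premises or in any cloud. The tag and endpoints below are placeholders.
IMAGE = "registry.example.com/myapp:1.0"

on_prem = docker.DockerClient(
    base_url="tcp://docker.internal.example.com:2376", tls=True)
in_cloud = docker.DockerClient(
    base_url="tcp://docker.cloud.example.com:2376", tls=True)

for name, host in [("on-prem", on_prem), ("cloud", in_cloud)]:
    host.images.pull(IMAGE)                       # same image on both hosts
    container = host.containers.run(IMAGE, detach=True)
    print(f"{name}: started {container.short_id} from {IMAGE}")
```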
With these risks in mind, consider the following best practices when putting together a public cloud solution.
- Identify and manage data sensitivity and business agility needs: Treat on-premises and cloud environments as a hierarchy, with on-premises serving as the “base” and retaining priority over storage and key infrastructure elements, rather than treating the two environments as equals.
- Consider “live migration” methods: Instead of replication-based approaches, plan to manage and stream instances. This simplifies processes and eliminates system inconsistencies.
- Manage only one complex architecture, according to a primary system design (usually the existing on-premises one): Design the architecture so it interacts with, and consumes, the compute primitive wherever it needs to. Fit the architecture to the data, which remains based and stored in the enterprise’s IT environment, and define policies as the enterprise requires. Tapping into the public cloud for compute only greatly reduces your dependency on a provider’s specific services (see the sketch after this list).
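As a sketch of that compute-only pattern, the snippet below uses boto3 to burst a stateless worker into EC2 while the data stays on-premises. The AMI, instance type, and subnet IDs are placeholders for your own environment.

```python
import boto3

# Sketch of "cloud for compute only": launch a worker on demand while the
# data and system of record stay on-premises. IDs below are placeholders.
ec2 = boto3.client("ec2")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder: your worker image
    InstanceType="c5.xlarge",             # generic compute, no cloud-specific services
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder: subnet reachable over the private link
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "burst-worker"}],
    }],
)
instance_id = resp["Instances"][0]["InstanceId"]
print(f"Launched burst worker {instance_id}; it reads and writes data "
      "over the private link back to the on-premises repositories.")

# When demand subsides, terminate it; nothing stateful lives in the cloud.
# ec2.terminate_instances(InstanceIds=[instance_id])
```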
Don’t get locked into the cloud vendor’s marketing pitch. On the other hand, don’t build expensive, hard-to-manage in-house migration capabilities. Figure out your desired architecture, make sure it works for your enterprise at scale, and then use public cloud resources to help manage demand fluctuations and support migration for key applications, without adding complexity, compromising the data, or locking you in.