Analysts are not employed by SDxCentral and the views, thoughts, and opinions expressed in their content belong solely to the author and do not reflect the views of SDxCentral. Note: AvidThink is a separate organization, created by Roy Chua, that is not affiliated with SDxCentral.
This article is underwritten by VMware. The underwriter of this article helps fund its creation but it has no control over the specific content of the article.
Disaster recovery used to be a relatively simple exercise. IT administrators backed data up to tape and hoped it wasn’t corrupted when the time came to restore it. Recovery improved considerably once magnetic disk drives became cost-effective, especially when those drives could be accessed via the cloud.
But then the unexpected occurred. The amount of data that needed to be backed up started to exceed what could be transferred in the available backup window. To deal with that issue, most IT organizations began continuously backing up changed blocks of data rather than trying to transfer massive files.
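The core idea behind block-level backup is simple: hash each fixed-size block and transfer only the blocks whose content changed since the last pass. A minimal sketch (the block size and function names here are illustrative, not any vendor's implementation):

```python
import hashlib

BLOCK_SIZE = 4096  # bytes per block; real products tune this value

def changed_blocks(data: bytes, previous_hashes: dict) -> dict:
    """Return only the blocks whose content differs from the last backup,
    updating previous_hashes in place as the new baseline."""
    changed = {}
    for index in range(0, len(data), BLOCK_SIZE):
        block = data[index:index + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if previous_hashes.get(index) != digest:
            changed[index] = block
            previous_hashes[index] = digest
    return changed

hashes = {}
# First pass: every block is new, so both 4 KB blocks are transferred.
first = changed_blocks(b"a" * 8192, hashes)
# Second pass: only the modified second block is transferred.
second = changed_blocks(b"a" * 4096 + b"b" * 4096, hashes)
```

Because each pass only moves the changed blocks, the transfer size tracks the rate of change rather than the total size of the data set, which is what lets backups run continuously instead of inside a nightly window.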
Of course, that only addressed one part of the equation. Any time an IT organization wants to pull data back down from the cloud, the egress fees can be exorbitant. Because of that, many IT organizations keep local copies of recently accessed data on the theory that most users will be looking for a file they used in the last 90 days. The cloud then morphs into a large pool of archived data that rarely gets accessed.
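That recency rule amounts to a simple tiering decision: anything touched inside the window stays on fast local storage, everything else lands in the cloud archive. A sketch of the policy, assuming the 90-day threshold mentioned above (the function and tier names are hypothetical):

```python
from datetime import datetime, timedelta

HOT_WINDOW = timedelta(days=90)  # assumed recency threshold from the article

def backup_tier(last_accessed: datetime, now: datetime) -> str:
    """Keep recently used files on fast local storage; archive the rest."""
    return "local" if now - last_accessed <= HOT_WINDOW else "cloud-archive"

now = datetime(2021, 6, 1)
backup_tier(datetime(2021, 5, 1), now)   # "local": accessed 31 days ago
backup_tier(datetime(2020, 1, 1), now)   # "cloud-archive": well past 90 days
```

Keeping the hot tier local is what avoids the egress fees: the common restores never touch the cloud at all.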
While that may address some of the cost issues associated with disaster recovery it only marginally improves recovery time. In fact, a new survey of 500 IT decision makers conducted by StorageCraft, a provider of data protection software, finds that more than half (51 percent) are not confident that their organization’s IT infrastructure can perform instant data recovery in the event of a failure.
Instead of trying to recover data and reconstitute applications when a disaster occurs, it’s now become a lot more practical to keep a standby set of applications running on virtual machines in a cloud service that can access copies of corporate data in an emergency. There may be some small loss of recent data in the event of an emergency. But access to both applications and most data using what is known as a disaster recovery-as-a-service (DRaaS) platform can now be provided in a matter of minutes.
Growth in DRaaS
DRaaS is not a new concept. It has been around in one form or another for years. But the rise of the public cloud has made it more cost effective for organizations to implement. In fact, once VMware and Amazon Web Services (AWS) make the public cloud service they are building generally available, one of the first services that they plan to jointly offer will be a DRaaS offering.
Overall, Gartner estimates that the DRaaS market is worth approximately $2.01 billion today and expects it to grow to $3.7 billion by 2021. Ransomware attacks are a big part of that growth because those attacks can only be thwarted by having more sophisticated approaches to data recovery in place.
Al Bunte, COO for Commvault, a provider of data protection software and hardware, said that data protection is being transformed by the rise of application programming interfaces (APIs) both inside the enterprise and in the cloud. Instead of managing backups and recovery via a user interface, data protection is being addressed as part of an integrated set of DevOps processes. In many cases, that shift is making developers more accountable for ensuring applications are always available and secure, while IT organizations are more accountable for preventing any downtime.
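In practice, API-driven data protection means a pipeline stage builds and submits a backup job the same way it submits a deployment. A hedged sketch of what such a stage might assemble — the endpoint shape, field names, and function here are entirely hypothetical, not Commvault's or any vendor's actual API:

```python
import json

def snapshot_request(app: str, retention_days: int) -> str:
    """Build the JSON body a CI/CD step would POST to a hypothetical
    data-protection API before rolling out a new release."""
    job = {
        "action": "snapshot",
        "application": app,
        "retention_days": retention_days,
        "verify": True,  # fail the pipeline if the snapshot can't be restored
    }
    return json.dumps(job)

body = snapshot_request("billing-db", 30)
```

Wiring the call into the pipeline, rather than a backup console, is what shifts accountability toward developers: a release that cannot produce a verified restore point simply does not ship.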
Naturally, it may take a while for that transition to play out. But it’s already apparent that data protection is quickly becoming a much more critical task to perform.