Cloud Repatriation is picking up speed

Massive data growth, rising costs for cloud services, and the need for more flexibility have given hosting data and workloads on-premises new momentum, as Eric Bassier, senior director of products at Quantum, explains.

The benefits of cloud computing, in general, are undisputed. Cloud usage has grown rapidly over the past decade, and particularly in the last three years, as the need to modernise IT quickly to enable remote working drove many organisations to the cloud. While some workloads are well served in the cloud, many organisations are now finding that some of their data and workloads are better off in their own data centres.

Accordingly, most organisations today deploy a hybrid or multi-cloud approach. Against this backdrop, cloud repatriation has become an increasingly popular trend over the last few years: organisations are moving workloads and data back to their own data centres. Why are we seeing this trend?

The cost of cloud computing is rising.

While the public cloud can be cost-effective for specific use cases, it has, in general, led to higher costs for organisations that moved data and workloads there without considering the best location for that data. There are understandable reasons why cloud services have become more expensive: generally rising costs, higher demand and growing complexity all play a part. One of the biggest reasons cloud budgets have ballooned, however, is that egress charges and other service fees have made the cost of public cloud storage highly unpredictable. In the end, many organisations paid far more for their cloud services than they expected or budgeted for. To lower costs and make them more predictable, organisations have been looking at other options.
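
To make that unpredictability concrete, here is a minimal back-of-the-envelope sketch. The rates are hypothetical placeholders in the rough ballpark of published list prices; real pricing is tiered and varies by provider and region:

```python
# Illustrative sketch of why egress makes cloud storage bills hard to predict.
# STORAGE_RATE and EGRESS_RATE are assumed flat rates for illustration only.

STORAGE_RATE = 0.023   # $ per GB-month stored (hypothetical)
EGRESS_RATE = 0.09     # $ per GB downloaded to the internet (hypothetical)

def monthly_bill(stored_gb: float, egress_gb: float) -> float:
    """Estimate one month's bill: capacity charge plus data transfer out."""
    return stored_gb * STORAGE_RATE + egress_gb * EGRESS_RATE

capacity = 100_000  # 100 TB kept in the cloud

# The storage line item is stable; the egress line item swings with usage.
for egress in (0, 10_000, 50_000):  # GB read back this month
    print(f"egress {egress:>6} GB -> ${monthly_bill(capacity, egress):,.0f}")

# egress      0 GB -> $2,300
# egress  10000 GB -> $3,200
# egress  50000 GB -> $6,800
```

The capacity charge is easy to budget for; the egress charge depends on how often the data is actually read back, which is exactly the part many organisations underestimated.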

Organisations need more cloud flexibility.

In a perfect cloud world, customers could easily pick and mix their ideal setup and flexibly move data and workloads between the parts of their chosen multi-cloud ecosystem. However, this is anything but easy. For one, public cloud vendors have successfully locked customers and their data into their platforms. The providers' price lists are structured so that it is cheap to upload data to the cloud but incomparably more expensive to download it again. And as data volumes grow, especially ever-larger sets of unstructured data, storing and processing them in the cloud has become very expensive.
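
The asymmetry is easy to see with hypothetical rates (ingress is commonly free, egress is not; actual list prices are tiered and provider-specific):

```python
# Hypothetical rates for illustration only; real provider pricing varies.
INGRESS_RATE = 0.00  # $ per GB uploaded (ingress is typically free)
EGRESS_RATE = 0.09   # $ per GB downloaded to the internet

def transfer_cost(gigabytes: float, rate: float) -> float:
    """One-way cost of moving a data set at a flat per-GB rate."""
    return gigabytes * rate

dataset_gb = 1_000_000  # a 1 PB unstructured data set

print(f"Cost to move 1 PB in:  ${transfer_cost(dataset_gb, INGRESS_RATE):,.0f}")
print(f"Cost to move 1 PB out: ${transfer_cost(dataset_gb, EGRESS_RATE):,.0f}")
# Cost to move 1 PB in:  $0
# Cost to move 1 PB out: $90,000
```

Getting data in costs nothing; getting the same data out carries a substantial exit fee, which is precisely what keeps it where it is.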

On top of this, “data gravity” forces organisations to keep their data and workloads close together. It is also increasingly clear that some of an organisation’s data should reside in the data centre and not be sent to the cloud at all, given concerns about how cloud storage is implemented, data sovereignty and security. To solve these problems and improve their cloud setup, organisations are comparing all the options for where their data and workloads might be ideally located. Many have concluded that some of their cloud workloads should come “back home” on-premises, as doing so promises a higher ROI.

Complete cloud repatriation makes little sense.

Several factors influence the decision to move back from the cloud to on-premises infrastructure. At the storage level, the advent of object storage on relatively inexpensive tape offers companies a tantalising business case for moving appropriate data back onto their own hardware. The tape renaissance comes at a time when the storage market is undergoing a seismic shift away from disk towards a strategy with two main tiers: a fast flash tier for high-performance workloads and a tape tier for low-cost bulk storage.
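
As a toy illustration of that two-tier idea (not any specific vendor's placement logic), a policy might route objects between tiers by how recently they were accessed:

```python
# Toy two-tier placement policy: recently accessed ("hot") objects stay on
# flash; cold bulk data ages out to tape. The 30-day cut-off is an assumed
# threshold and would be tuned per workload in practice.

from datetime import datetime, timedelta

HOT_WINDOW = timedelta(days=30)  # assumed hot/cold cut-off

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Return 'flash' for recently accessed objects and 'tape' for cold ones."""
    return "flash" if now - last_access <= HOT_WINDOW else "tape"

now = datetime(2023, 6, 1)
print(choose_tier(datetime(2023, 5, 20), now))  # flash (accessed 12 days ago)
print(choose_tier(datetime(2022, 11, 1), now))  # tape (cold for ~7 months)
```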

However, full repatriation of data and workloads from the public cloud makes little sense for most organisations, just as a cloud-only strategy is neither economically nor practically ideal. In the past, enterprises made a conscious decision to incur the additional costs of running applications in public clouds in order to maintain flexibility and scalability. But as organisations now actively look for ways to improve their data portability, they are noticing that they need more flexibility at both the hardware and software levels. Only with a flexible overall system will they be able to find the optimal infrastructure mix, including repatriating data and workloads back to on-premises hardware, resulting in a net gain in ROI.

Conclusion: Flexibility in a “cloud-also” world

To achieve a higher ROI, enterprises need more flexibility in the data centre. This allows them to build hybrid and multi-cloud environments that let them place workloads and data wherever they are best suited. For some data and workloads, that means repatriating them from the public cloud back into the data centre, where they can be stored far more cheaply. The availability of NVMe as a fast flash tier adds another layer of flexibility for workloads that need higher performance. And for special use cases, such as PaaS solutions, a hyperscaler’s platform might be the best fit.

To reap the benefits of cloud computing, organisations today need maximum flexibility to use whatever infrastructure works best for each use case. Modern technologies such as software-defined data management solutions help organisations achieve that flexibility, whilst the emergence of fast NVMe and cheap tape storage gives them options to improve the mix of locations for their data and workloads. So it is not a question of “cloud-only” versus “on-premises only” but a combination of both, with the flexibility to change, shift, move and migrate data and workloads whenever needed.
