When an enterprise sets out its cloud migration strategy, it will almost certainly have to embrace all three options: re-hosting, re-platforming, and refactoring. This is because every application is at a different point in its lifecycle, and budgets, changing business needs, and digital transformation goals all differ, among other factors.



Refactoring: the opportunity to reimagine the application’s architecture, code, and scope by leveraging cloud-native technologies like Kubernetes and Docker.



Re-platforming: modify specific components of the application to run in the cloud without changing the core architecture. This is sometimes called lift and shape, or lift, tinker, and shift.



Re-hosting: lift and shift entire applications as they are, from their legacy runtime environment to the cloud.

The motivations for each are different, although they have some commonality, and choosing the right path for each application and its associated data requires detailed analysis and planning. The expectation of re-hosting an application is that it will be cheaper to run than hosting and operating it in a data center, once everything from cooling to the staff managing it is accounted for. Some legacy applications may be of low value yet still essential to keep the business running after a successful migration to the cloud.

The simplest route

Re-hosting is the simplest cloud migration option and is also known as lifting and shifting. This move involves creating a virtual machine in the cloud, then installing the application along with its associated data. In effect, all that has changed is the endpoint location, and users won’t even notice – except that the application costs less to run, performs faster, and has less downtime.

However, these benefits depend on the migration being done well, ensuring it doesn’t turn into lift, shift, and drift, with costs out of control and the desired benefits unrealized.

At its 2020 IT Symposium, Gartner announced its research had found that 45% of organizations that carry out a lift-and-shift migration will overspend by 70% during the first 18 months of their new architecture, mainly because they over-provisioned by 55%. In other words, with a working roadmap for migrating to the cloud, they could have avoided much of that overspend and still realized 18 months’ worth of benefits.

The middle ground

Re-platforming is a halfway house between re-hosting, where the application is virtualized but intact, and refactoring, in which the application is decomposed and recreated to leverage cloud-native technologies.

With re-platforming, the application gains cloud-native attributes, but its core architecture remains fundamentally the same. It is sometimes referred to as lift, tinker, and shift – to tinker means attempting to fix something in a less-than-ideal way, reflecting the less-than-ideal outcomes often produced by poor execution of this approach. Re-platforming, too, needs the steps of its development and deployment carefully mapped out to succeed – as demonstrated for refactoring below.
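
To make the distinction concrete, here is a minimal Python sketch of a re-platforming move: a single storage component is swapped for a cloud-backed equivalent behind the same interface, while the core application logic is untouched. All class and function names are illustrative, and the stores are in-memory stand-ins rather than real SDK calls:

```python
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """Storage interface the core application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class LocalDiskStore(BlobStore):
    """Legacy component: stands in for local filesystem storage."""

    def __init__(self) -> None:
        self._files = {}  # stand-in for real disk I/O

    def put(self, key: str, data: bytes) -> None:
        self._files[key] = data

    def get(self, key: str) -> bytes:
        return self._files[key]


class CloudObjectStore(BlobStore):
    """Re-platformed component: same interface, cloud object storage.
    In a real migration this would wrap a provider SDK such as boto3."""

    def __init__(self) -> None:
        self._bucket = {}  # in-memory stand-in for a cloud bucket

    def put(self, key: str, data: bytes) -> None:
        self._bucket[key] = data

    def get(self, key: str) -> bytes:
        return self._bucket[key]


def archive_report(store: BlobStore, name: str, body: bytes) -> bytes:
    """Core application logic: unchanged by the migration."""
    store.put(name, body)
    return store.get(name)
```

Because only the component behind the interface changes, the "tinker" step is isolated and the rest of the application ships to the cloud as-is.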

Going cloud-native

Refactoring is about taking full advantage of containerization – that is, using cloud-native technologies like Docker and Kubernetes not only to do things better, but to do them differently, and perhaps to add new functionality.

Here are the high-level steps required to re-platform each application.

Caption: Cloud-native re-platforming steps

This example of just one part of Step 2 in the re-platforming process makes clear how complex the migration is and how thorough the approach needs to be. Every step breaks down into many more incremental ones, and all of them are essential to the success of the migration.

The re-platformed, cloud-native applications are assembled from building blocks provided by public cloud platforms – such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure. They provide a block for every aspect of all applications, but no single application requires them all.

For example, there are blocks to support analytics (like data factories and lakes), automatic scaling and load balancing (so that service to the end user is not affected by the level of traffic entering the cloud), chatbots, compute, storage, databases, microservices, and many more.
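
As a concrete illustration of the automatic-scaling block, the proportional rule used by autoscalers (similar in spirit to Kubernetes’ Horizontal Pod Autoscaler) can be sketched in a few lines of Python. The function and parameter names here are illustrative, not any provider’s API:

```python
import math


def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.6,
                     min_r: int = 1, max_r: int = 10) -> int:
    """Proportional scaling rule: choose a replica count so that
    average utilization approaches the target, clamped to bounds."""
    if cpu_utilization <= 0:
        return min_r  # idle service: scale down to the floor
    raw = math.ceil(current * cpu_utilization / target)
    return max(min_r, min(max_r, raw))
```

The cloud platform evaluates a rule like this continuously and adjusts capacity, which is why end users are insulated from traffic spikes without anyone over-provisioning by hand.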

Assembling, running, maintaining, and upgrading containerized applications can be super-efficient if done expertly, but as we’ve seen, there are many moving parts, and it’s completely new territory for many enterprises. It’s little wonder that the Securing your enterprise in a multi-cloud environment report, published in November 2022, found that 94% of 1,500 respondents said their enterprise overspent its original budget by an average of 43% when migrating to the cloud.

Integration is foundational to cloud migration

Integration is integral to the success of all three migration approaches, but especially refactoring, where it is most prevalent. To add to the challenge, most enterprises will have multi-cloud deployments: a combination of public and private cloud (hybrid) operations, which they intend to migrate to cloud-native technologies. Note that although cloud-native technologies are mostly associated with public cloud, they can be deployed on private clouds too.

Hence, establishing the right integration architecture is also a major critical success factor, and a hybrid integration platform (HIP) is becoming the preferred approach to underpin cloud migrations. As the schematic below shows, a HIP can provide a single point of control, administration, and governance to prevent drift and over-provisioning – see this white paper for more about HIPs.

Caption: A reference architecture for HIP


A HIP approach to cloud ops

Note that this architecture accommodates legacy integration mechanisms – such as enterprise service bus (ESB) and web services – and modern integration components like containers, microservices, and API gateways. What is not explicit is that the reference architecture also supports what Gartner calls Enterprise Integration Platform as a Service (EIPaaS) for Software as a Service or SaaS applications (see the top box on the left-hand side of the diagram – Cloud (SaaS) Applications).

Organizations are increasingly adopting EIPaaS to address the huge integration challenge resulting from the explosion of Software as a Service (SaaS) applications, which has replicated the age-old problem of silos that SaaS and cloud were supposed to fix.

The right-hand side of the schematic shows DevOps practices, cutting across many blocks, plus APIs and the digital ecosystem: part of the drive for digitalization is so that organizations can become extended enterprises, buying and selling products, assets and services through an ecosystem that supports new business models.

The reference architecture recognizes that legacy and modernized applications, plus those somewhere in between, and all of their associated data, will run in parallel for the foreseeable future. The key is doing this so that each is as successful as possible in achieving its desired outcomes.

Use the best tools

There are many tools that can help with the execution of the integration architecture, particularly in terms of standardization to aid automation, which is the only viable way of managing integrations as they proliferate. The idea is to standardize, replicate, and reuse interfaces in a consistent manner to enable interoperability, agility, and scalability.

For instance, Automaton™ is a no-code tool that automates the testing of data interfaces, APIs, and other components of user interfaces. This means staff without coding skills can use it, and Automaton can also link to external data sources, supporting the extended enterprise.

Deplomatic, an API-first automation framework, can be deployed onto any cloud platform in a single click. Tooling such as Ansible and Terraform can speed up the development of infrastructure as code.

A micro-gateway:

The DigitMarket™ micro-gateway is deployable as a standalone gateway instance to secure microservices. It has configurable usage policies (throttling, rate limiting, etc.) and security policies (OAuth, HTTP-Basic, etc.). A token-based authentication model makes it highly secure.
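
The throttling policy such a gateway applies per client is typically a token bucket: tokens refill at a steady rate, and bursts are capped by the bucket’s capacity. A minimal Python sketch of the technique (illustrative, not DigitMarket’s actual implementation), with an injectable clock for testability:

```python
import time


class TokenBucket:
    """Token-bucket throttle: `rate` tokens/second refill,
    bursts capped at `capacity`. Each allowed request costs one token."""

    def __init__(self, rate: float, capacity: int, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)  # start full: allow an initial burst
        self._now = now
        self._last = now()

    def allow(self) -> bool:
        """Return True if the request is within policy, else throttle it."""
        t = self._now()
        # Refill based on elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self._last) * self.rate)
        self._last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway would keep one bucket per API key or client, rejecting (or queueing) calls when `allow()` returns False.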

A deployment automation tool:

Deplomatic is an advanced deployment automation tool: a container-based environment provisioning tool, built on Ansible, that provisions sandbox environments, supports approval-workflow-based environment creation, and enables continuous deployment. It can be exposed through APIs for easy integration into DevOps workflows (continuous deployment).

A visual integration tool:

‘Coupler’ is designed to model flows through configuration (drag-and-drop) and expose them on a micro-gateway. It supports multiple protocols such as SOAP, REST, JDBC, MQTT, JMS, etc. It offers node-based data-flow modeling with configurable properties for each node.
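
Node-based data-flow modeling of this kind can be sketched in Python: each node is a function plus its configured properties, and the flow applies the nodes in order. The node names here are illustrative, not Coupler’s actual node set:

```python
def build_flow(nodes):
    """Compose configured nodes into one callable flow.
    Each node is a (function, config) pair applied in order."""
    def flow(payload):
        for fn, config in nodes:
            payload = fn(payload, **config)
        return payload
    return flow


# Two example nodes, each driven purely by configuration.
def select_fields(record, fields):
    """Keep only the configured fields of a record."""
    return {k: record[k] for k in fields}


def rename(record, mapping):
    """Rename fields according to the configured mapping."""
    return {mapping.get(k, k): v for k, v in record.items()}
```

A visual tool would let users wire these nodes up by drag-and-drop; the configuration it produces reduces to exactly this kind of ordered node list.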

A stubbing automation tool:

AutoStub® is an automation tool for API mocking. It has automatic data population capabilities configured for different response parameters. Its intelligent mocking capability helps design stateful data across API invocations. The tool is used to create DevOps lifecycle automation for microservices by stubbing dependencies for each microservice to enable “testability” of each use case.
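
The stateful-mocking idea can be sketched in Python: canned responses are registered per endpoint, and an invocation counter lets a stub return different data across successive calls. This is an illustrative sketch of the technique, not AutoStub’s API:

```python
class ApiStub:
    """Minimal stateful API stub: canned responses per (method, path),
    with an invocation counter so a stub can return different data
    across calls (stateful mocking)."""

    def __init__(self):
        self._routes = {}  # (method, path) -> list of response bodies
        self._calls = {}   # (method, path) -> invocation count

    def when(self, method, path, responses):
        """Register a sequence of responses; the last one repeats."""
        self._routes[(method, path)] = list(responses)

    def handle(self, method, path):
        """Return (status, body) for a request, advancing the state."""
        key = (method, path)
        if key not in self._routes:
            return 404, {"error": "no stub registered"}
        n = self._calls.get(key, 0)
        self._calls[key] = n + 1
        seq = self._routes[key]
        return 200, seq[min(n, len(seq) - 1)]
```

Stubbing each dependency this way lets a microservice’s use cases be exercised in isolation, which is exactly what makes them testable in a DevOps pipeline.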

An API test automation tool:

Automaton™ is a test automation tool to test API flows using a visual approach. It is extensible and can test beyond APIs. Its API-driven approach helps trigger tests from external scripts. It is used to create DevOps lifecycle automation for microservices through test automation for continuous integration. It can be configured to generate reports for each test execution.

In conclusion

Migrating applications and data to the cloud and deploying cloud-native technologies promises many business and operational benefits – from reduced risk, cost, and downtime to greater operational and business flexibility, bringing advantages such as faster time to market and the ability to react faster to market changes.

However, migrating to the cloud is complex and needs careful, step-by-step planning and execution for every single application. Regardless of whether an application is to be re-hosted, re-platformed, or refactored, the migration requires expertise and experience.

Sticking to a budget and achieving the desired outcomes on schedule – or perhaps even exceeding them as more possibilities become clear – are all excellent reasons for partnering with a cloud migration and integration expert.

You can find out more about Torry Harris Integration Solutions’ Legacy to cloud-native kit and services here.

Contact us today to get started.