Your legacy assets, enriched over time, are important and valuable – they are what makes your company unique. They contain years of evolved business rules and transactions. Legacy isn’t about the age of assets; it describes the point at which they become increasingly difficult, time-consuming and expensive to support, or cannot easily be extended because they weren’t designed for that.

At this point they are no longer fully supporting your business’ needs and are in danger of being a drain on resources and stifling innovation.

Technical debt

In the US, the government’s Internal Revenue Service (IRS) runs on computer code developed in 1959. In 2018, only 20% of its IT budget was spent on development, modernization and enhancement; the rest went on running and maintaining legacy systems.

Stéphane Richard, CEO and Chair of the Orange global telecoms group, warned the sector that it must modernize its legacy assets: “To stay relevant and keep growing, operators have to capture part of the new value they are helping to create thanks to their networks. But obviously, developing new digital services needs considerable IT agility, and to seize all the opportunities we need to rethink the way we do IT.”

He said that unless the modernization of legacy assets sped up, “by 2025, technical debts will consume more than 40% of operators’ current IT budgets. Carriers often struggle to balance quick-fix changes to meet short-term business needs, which only increase their financial debt, their technical debt.”

More urgent

The need to modernize legacy systems and applications has become more urgent as customers increasingly expect to engage with companies digitally – an expectation boosted by the ongoing pandemic. Put another way, customers want you to be easier to do business with, and in a recent IDG survey, 67% of respondents said they believed business transformation efforts cannot proceed effectively without IT modernization.

A new McKinsey Global Survey of executives found that companies have accelerated the digitization of their customer and supply-chain interactions and of their internal operations by three to four years. The share of digital or digitally enabled products in their portfolios has accelerated by a staggering seven years.

As Stefan Van Der Zijden, VP Analyst at Gartner, stated, “For many organizations, legacy systems are seen as holding back the business initiatives and business processes that rely on them. When a tipping point is reached, application leaders must look to application modernization to help remove the obstacles.”

Migration strategies

No two companies will follow identical paths to modernize legacy systems, which, as we’ve noted, are unique and among their major differentiators. There are a number of possible approaches, depending on the systems themselves and their purpose, the desired business and operational outcomes, the cloud-migration strategy and organizational priorities.

Typically, modernizing legacy systems means they need to work alongside cloud-native solutions. Cloud-native describes an environment that is scalable, usually containerized, and supports rapid change and usage-based payment. The aim is to bring scale and efficiency to legacy assets while enabling enterprises to use standard cloud-native components across their operations and reduce OpEx.

The advantages of modernizing legacy to cloud native are achieved by making legacy assets available to interoperate seamlessly with scalable third-party products, or even full stacks – three possible variations of how legacy could work alongside a cloud-native environment are shown below.

Stephen Reidy, CIO of Three Ireland, has extensive experience of modernizing legacy systems and applications. After the merger of O2 Ireland and Three Ireland in 2015, the company was running two technology stacks. This was expensive and slowed time to market because changes had to be implemented on both, and was a barrier to Three Ireland providing a consistent digital omnichannel experience to the combined customer base.

He says, “The key to a seamless customer transition is having a really sharp migration strategy for the old stack into the new stack”.

Yet creating a migration strategy is a big challenge for most companies, which typically lack the in-house resources and knowledge. As the point of the exercise is to extract maximum value from some of a company’s biggest assets, it makes sense to use legacy-to-cloud-native transformation services from a specialist firm.

There are two huge issues to consider before you get started. The first is that moving to a cloud-native environment that uses microservices is not just about technology, and the costs are significant – a new, complex application and network infrastructure is a substantial investment. There are also costs associated with disrupting existing culture and practices, and with introducing diversity in platform technology, such as languages, runtimes and design styles. Then there is the expense of coordination across domain teams to deliver enterprise-level benefits.

One of the drivers for moving to cloud native is reduced OpEx, but this can be hard to quantify, and it is easy for inexperienced IT teams to incur spiralling costs, as we explore in some detail in the section on microservices below. Reducing costs is also only part of the picture: Three Ireland reports these benefits from modernizing legacy to cloud native:

  • Consolidation of 300 systems, 16 catalogs and more than 50 third-party partners
  • 30% increase in self-service adoption and automated customer interactions
  • 30% reduction in calls to contact centers
  • Improvements in digital NPS, customer churn, and time to market for new products and services
  • 360° customer view across multiple channels.

Get the best possible help

Reidy’s advice about picking the right legacy modernization partner is, “They have to buy into your [transformation] vision and work collaboratively with you as a business and with your IT department. That’s very key.” Three Ireland chose Torry Harris Integration Solutions (THIS), which Reidy describes as being “a valuable friend and partner to me over many years.

“They bring to the table both technology and a workforce that truly works with customers’ objectives and values. They deliver a technology solution that is flexible and adaptable.”

THIS helps IT leaders analyze, extract and modernize functionality from their legacy estate in a consistent manner for a standardized, accelerated move to a cloud-native setup. A key element of its offer is a legacy to cloud-native kit, which is based on years of experience. Here’s an example of how the suite of tools it provides works together.

Coupler, a visual editor, connects, transforms and combines data points. It provides adapters for common protocols and data formats. To connect to a legacy system, such as an AS/400, use the message-queue and secure file transfer protocol (SFTP) adapters.

Having defined the flow and an endpoint in Coupler, securely expose the APIs through the API gateway – part of THIS’ API Management Suite, which includes a developer portal, a publisher portal and an authentication server. The gateway can be deployed centrally or as multiple lightweight micro-gateway instances.

AUTO stub mocks a third-party system by automatically generating mock APIs from open API specifications. Coupler uses this mock as one of the steps in creating the customer accounts.
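To illustrate the idea of spec-driven mocking – not AUTO stub’s actual implementation, which is proprietary – here is a minimal sketch that derives a placeholder response from a simplified OpenAPI-style schema. The schema fields and example values are hypothetical.

```python
# Conceptual sketch: generate a mock response from a simplified
# OpenAPI-style schema, the way a spec-driven stubbing tool might.

def mock_from_schema(schema):
    """Return a placeholder value matching a (simplified) OpenAPI schema."""
    t = schema.get("type", "object")
    if t == "object":
        return {name: mock_from_schema(prop)
                for name, prop in schema.get("properties", {}).items()}
    if t == "array":
        return [mock_from_schema(schema.get("items", {}))]
    if t == "string":
        return schema.get("example", "string")
    if t == "integer":
        return schema.get("example", 0)
    if t == "boolean":
        return schema.get("example", False)
    return None

# Fragment of a hypothetical third-party "create account" response schema:
account_schema = {
    "type": "object",
    "properties": {
        "accountId": {"type": "string", "example": "ACC-0001"},
        "active": {"type": "boolean", "example": True},
    },
}

print(mock_from_schema(account_schema))
# {'accountId': 'ACC-0001', 'active': True}
```

Because the mock is generated from the same specification the real third party publishes, the integration flow can be built and tested before the third-party connection exists.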

Once connected to the legacy system, the legacy application processes requests from message queues. The requested data is received from Coupler through the micro-gateway, and the messages screen lists received requests, including the content of each message. Coupler transforms the JSON data into the space-delimited string buffer required by the legacy application.
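The JSON-to-legacy transformation step can be sketched as follows. The field names and their order are illustrative assumptions (Coupler’s real adapters are configured visually), but the principle – flattening a JSON payload into the positional, space-delimited record a legacy consumer expects – is the same.

```python
import json

# Hypothetical field order expected by the legacy (e.g. AS/400-style)
# message-queue consumer; purely illustrative.
FIELD_ORDER = ["customer_id", "first_name", "last_name", "plan"]

def json_to_legacy_buffer(payload: str) -> str:
    """Flatten a JSON request into the space-delimited string buffer
    a legacy message-queue consumer expects."""
    record = json.loads(payload)
    return " ".join(str(record.get(field, "")) for field in FIELD_ORDER)

msg = '{"customer_id": "C123", "first_name": "Ada", "last_name": "Byron", "plan": "5G"}'
print(json_to_legacy_buffer(msg))  # C123 Ada Byron 5G
```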

Automaton, a visual test-automation tool, helps define tests in a step-by-step flow that can be configured in line with the previous activities. This helps visualize and validate those activities: test results can be reviewed and, in this case, show the successful customer account creation.

Deplomatic, a deployment automation tool, helps configure applications and application versions across multiple environments. With provision for configuring custom workflows and support for multiple product architectures, Deplomatic automates deployment of all the above tools to orchestrate this and other use cases across test and production environments.

Decomposing to release value

The legacy to cloud-native kit also includes frameworks and processes, such as for the decomposition of legacy assets. This enables an in-depth analysis of the legacy type and provides the decision points that must be considered to navigate the transition. Legacy assets could be monolithic systems that have evolved over many years, third-party systems that are no longer supported, or full-stack solutions that do not expose connectors and therefore need APIs to function in a cloud-native environment.

The legacy migration process flow includes a pre-migration checklist, drawn up by THIS after conducting due diligence on your legacy estate. It covers processes and dependencies within customer journeys on legacy, the data model, security aspects, physical architecture, regulatory rules, training manuals and more.

THIS then aligns the checklist with your business priorities, including an inventory of new digital business models, envisaged customer journeys, digital-channel features for the new models, security and up-to-date regulatory policies, scaling needs, and data redundancy for the new cloud-native set-up.

Due diligence

The due diligence phase confirms the availability of the pre-migration checklist items, ensuring readiness to start the analysis work.

Impact assessment

Keeping the target state as a reference, this step assesses compatibility across customer journeys, data, user interfaces, external interfaces and data integration. The impact is documented, and the different teams within the telco are consulted over a series of deep-dive workshop sessions.

Migration and rollback solution design

Based on the impact analysis, the technical solution is created and documented. It contains information such as the criteria for selecting the customer-data subset, sequencing, user journeys and so on, plus the migration logic in the form of flow charts and sequence diagrams. The sanity-testing and rollback approaches are identified and documented. The scope extends to user-interface channels, the integration layer and the source/target systems.

Migration sandbox environment setup

One of the challenges most enterprises face is the lack of a “playground” area with production-like data in which to develop and test a migration solution. In most cases, the combination and variety of data is not available in the legacy development and test environments. This step builds a sandbox environment along with the data set-up. In alignment with your IT governance and regulatory restrictions, data from production can be loaded into the sandbox after de-sensitization and anonymization. Similarly, a sandbox on the target cloud system should be made available.
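A common de-sensitization approach, sketched below under assumed field names and an illustrative salt, is deterministic pseudonymization: equal inputs always map to equal tokens, so joins and foreign-key relationships in the sandbox data keep working after the sensitive values are replaced.

```python
import hashlib

# Minimal sketch of de-sensitizing production records before loading them
# into a migration sandbox. Field names and the salt are illustrative
# assumptions, not a prescribed scheme.
SALT = b"sandbox-refresh-01"          # rotate per sandbox refresh
SENSITIVE = {"name", "email", "msisdn"}

def pseudonymize(value: str) -> str:
    """One-way deterministic token: preserves referential integrity
    across tables while hiding the real value."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

def anonymize_record(record: dict) -> dict:
    return {k: (pseudonymize(v) if k in SENSITIVE else v)
            for k, v in record.items()}

row = {"customer_id": "C123", "name": "Ada Byron", "plan": "5G"}
safe = anonymize_record(row)
assert safe["plan"] == "5G" and safe["name"] != "Ada Byron"
```

Determinism is the key design choice here: the same customer appearing in two exported tables receives the same token in both, so the migration scripts can be exercised against realistic relational structure without exposing real identities.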

Migration prototyping

Once the data in the source and target sandboxes is ready, key assumptions and high-risk scenarios are validated by creating a series of prototypes. Based on the results, the design, approach and steps are updated. The prototyping phase should also validate rollback strategies.

Migration scripts development and preliminary testing and dry-run

Based on the results of the prototyping, migration scripts are developed and tested iteratively in the sandbox environment. This includes simultaneous changes to the channels, to bring the environment as close as possible to the target blue-green deployment strategy. The rollback strategy should extend to the channels as well. Once the migration/parallel run has been simulated in the sandbox environment, a dry run with limited scope is conducted in the production environment to detect any surprises.
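The channel-side routing that underpins such a blue-green, phase-by-phase migration can be sketched in a few lines. This is a conceptual illustration with made-up customer IDs, not part of the kit: migrated customers are routed to the new cloud stack, everyone else stays on legacy, and rollback is simply removing a batch from the migrated set.

```python
# Conceptual sketch of blue-green routing during a phased migration:
# the set of migrated customers drives which stack serves each request.

migrated = set()

def route(customer_id):
    """Send migrated customers to the new cloud stack, others to legacy."""
    return "cloud" if customer_id in migrated else "legacy"

def migrate_batch(ids):
    """Forward step of one migration phase."""
    migrated.update(ids)

def rollback_batch(ids):
    """Rollback step exercised in the dry runs."""
    migrated.difference_update(ids)

migrate_batch(["C1", "C2"])
assert route("C1") == "cloud" and route("C9") == "legacy"
rollback_batch(["C1"])
assert route("C1") == "legacy"   # rollback restores the old path
```

Keeping the routing switch this cheap is what makes rollback a low-risk, repeatable operation rather than a crisis procedure.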

Failure simulation and rollback dry-run

Based on the dry run, failure scenarios are identified and addressed in the migration scripts, channels and integration flows. Errors are simulated and rollback is tested in the sandbox. Once this works, a dry run of the rollback is conducted in production.

Iterative migration execution

Based on the previous steps, the migration solution is industrialized and integrated into the production environment to kick-start the phase-by-phase migration. Any further surprises (which should be minimal at this stage) are addressed and fixed in the migration solution.

Decommissioning

After an extended period of parallel running, and once all user data, processes and so on have been migrated, the legacy stack is decommissioned. The integration layer is simplified and all calls to legacy are removed. The new cloud system functions as the master. The channel-specific routing changes are retained to support a blue/green deployment strategy for future enhancements.

Muddles with microservices

Microservice deployments are a good example of why an expert guiding hand is invaluable in migrating legacy to cloud native. A microservice is a small, autonomous, self-contained software component, loosely coupled and built around a business domain. It encapsulates functionality, hides implementation details, deploys independently, isolates failure and is easy to monitor.

Large applications can have modular architectures built on microservices that communicate with each other via APIs. By splitting an application into smaller services and decoupling interdependencies, companies can gain greater agility and flexibility, particularly as the architectural style lends itself to automating build-deploy-release cycles for continuous deployment and the DevOps model.

However, if deployed poorly, these advantages are lost or only partly realized. A common issue, as with all relatively new technologies, is that microservices are sometimes deployed for technology’s sake rather than to achieve well-defined outcomes. They can also become too tool-centric, and companies often waste money creating exclusive infrastructure for each microservice or reinventing the wheel for cross-functional capabilities.

High costs are a potential danger: because each microservice is independently scalable, each service requires its own infrastructure. And as microservices have many more moving parts than monolithic legacy systems, the cost of providing each of them with its own infrastructure soon racks up.

For example, sometimes enterprises want to implement microservice architecture using their old and trusted relational database, but problems soon become apparent if several microservices are updating a single relational database.
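The classic failure mode of several services sharing one relational table is the “lost update”. The sketch below uses a plain dict to stand in for a shared database row, and two hypothetical services (billing and top-up) to show how naive concurrent read-modify-write cycles silently overwrite each other.

```python
# Sketch of why several microservices updating a single shared relational
# table is fragile: with naive read-modify-write, concurrent services
# overwrite each other's changes (a "lost update").

balance = {"C123": 100}   # stands in for one shared database row

def read(customer_id):
    return balance[customer_id]

def write(customer_id, value):
    balance[customer_id] = value

# Two services read the same starting value before either writes...
billing_view = read("C123")
topup_view = read("C123")

# ...then each applies its own change and writes back:
write("C123", billing_view - 30)   # billing service deducts 30
write("C123", topup_view + 50)     # top-up service adds 50

print(balance["C123"])   # 150, not the correct 120: billing's update is lost
```

Database-per-service (each microservice owning its own data store and exposing changes only through its API) is the usual cloud-native answer, at the cost of giving up cross-service transactions.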

Services also need to communicate with each other. Depending on the service design, this can mean a lot of remote calls, which carry high costs in network latency and processing.

Governance maximizes microservices

Without governing guidelines to regulate cost, microservices expenditure can quickly skyrocket. Microservices governance is also key to gaining greater advantages from modernizing legacy.

Key considerations are:

  • What do I want to govern, and at what level? As Mark Newman, Chief Analyst, TM Forum, noted, network operators are trying to figure out whether the microservices they create for a specific application will be reusable for others and, by extension, what governance programs they need to put in place. Some are deciding to build governance at a higher level, for groups of microservices rather than individual ones – microservices might be refactored within one component.
  • Standards that ensure that all teams address concerns about cross-cutting capabilities in a uniform way to avoid multiple teams taking different approaches, resulting in a bigger mess.
  • It is crucial to establish best practice policies along with clear communication channels where accountability and effectiveness can be measured.
  • The best policies enshrine decentralized governance for the tools used for each microservice: they are best determined by the teams and their skillsets.
  • Establishing a DevOps practice before transitioning to microservices is a great approach that helps in determining communication strategies in advance.

Governance is a whole science in itself, and is another reason to get the most proven, experienced help you can for your legacy system modernization and migration.
