Working in the COBOL mine
Developing with Legacy Systems – Part 2
The sector where the integration of long-standing legacy applications remains a vital requirement is, of course, the broad reach of the financial services community. When an application has established itself and proved not just its capabilities but its reliability and overall efficiency, the business is loath to change it. In the finance market, "if it ain’t broke don’t fix it" is still a good maxim: changing an application, let alone conducting a rip-and-replace exercise just because a newer alternative exists, carries the significant risks that any change can induce.
One of the most important languages underpinning such applications is COBOL and, despite its age, it remains at the heart of many business-critical applications. Analyst Gary Barnett, Research Director at Ovum (quoted here), reported in 2005 that COBOL accounted for 90 per cent of all financial transactions, and most finance companies are well aware of its importance to them. But it is also the language of many other business applications (Barnett estimates 75 per cent of transactions generally), and many enterprises may be less aware that they have it as part of their IT infrastructure.
COBOL is also a language that has attracted much attention from software vendors looking to extend its reach to cover new platforms and environments and, since the earliest days of the classic Wintel-architected PC servers, Micro Focus has been driving just that process. The company now has a broad range of tools and solutions for most business and applications developer needs, such as Micro Focus Revolve, which supports the first stage in legacy modernisation, enabling users to understand existing legacy systems and the potential impact of changes to them. The latest version, V.7, supports SOA and new compliance initiatives by helping developers to identify and expose business processes buried deep inside legacy systems.
The key to the future with legacy COBOL applications is giving developers the flexibility to port applications to a range of different platforms. Micro Focus provides this with Studio for COBOL developers, which incorporates a comprehensive COBOL development environment for Windows, UNIX and Linux. The development environments support both the .NET Framework and Java EE, and Micro Focus provides COBOL compilers for almost every platform if you wish to take the re-deployment route – COBOL is the most portable of all languages, not excluding Java.
Micro Focus’ experience demonstrates a particular truth of legacy modernisation, namely that development staff will need to either learn to understand the working legacy systems in current use, together with the service-oriented technologies of the future, or find consultants that can provide such understanding. They certainly need to treat the advocates of "rip and replace" modernisation with caution, for often they only understand the latest "one size fits all" technology they’re promoting.
The important objective in this style of legacy modernisation is risk mitigation through re-use of what is known to work already, coupled with choice going forward. The enterprise can keep its legacy systems and maintain them in a more agile fashion, using a PC-based development environment, if that makes business sense; or move them to a new platform, perhaps if they are on a "burning platform" nearing the end of its supported life. It can even mine its legacy for business processes which can be input to the development of new software; or adopt some combination of all these approaches.
The first stage in the legacy modernisation process is to understand the business value embodied within legacy systems. This means that developers must give business domain experts (business analysts) access to the legacy, using tools that help them find their way around it at the business level. Some awareness of, say, COBOL and of the legacy architectures will be helpful but we aren't talking about programmers rooting around in code – modern tools can automate much of this analysis for staff working at a higher level.
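As a toy illustration of the kind of inventory such analysis tools build (real products such as Micro Focus Revolve go far deeper, tracing data flow and cross-references), the sketch below simply lists paragraph names in a COBOL source fragment so an analyst can see what entry points exist. The column heuristic, class name and sample paragraphs are all hypothetical simplifications, not the workings of any particular tool.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy legacy-analysis sketch: list paragraph names in a COBOL fragment.
// Heuristic only - a paragraph header is taken to be a name near the left
// margin ending in a full stop on its own line.
public class CobolParagraphScanner {
    private static final Pattern PARAGRAPH =
        Pattern.compile("(?m)^ {0,3}([A-Z0-9][A-Z0-9-]*)\\.\\s*$");

    public static List<String> paragraphs(String source) {
        List<String> names = new ArrayList<>();
        Matcher m = PARAGRAPH.matcher(source);
        while (m.find()) {
            names.add(m.group(1));
        }
        return names;
    }

    public static void main(String[] args) {
        String src =
            "PROCEDURE DIVISION.\n" +
            "CALC-INTEREST.\n" +
            "    PERFORM APPLY-RATE.\n" +
            "APPLY-RATE.\n" +
            "    ADD 1 TO WS-COUNT.\n";
        // prints [CALC-INTEREST, APPLY-RATE]
        System.out.println(paragraphs(src));
    }
}
```

A real analysis tool would, of course, parse rather than pattern-match, and would map PERFORM and CALL chains onto the business processes the analysts care about; the point here is only that this kind of inventory work is mechanical and automatable.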
Once the legacy systems are understood, they can be emulated on a different, cheaper, more agile platform. With modern technology, it is quite feasible to change and maintain even a traditional COBOL mainframe application on the PC, and then migrate the changes back to applications running on the mainframe.
However, legacy modernisation is a serious project, involving cultural change, so it has associated risks, which must be managed. An enterprise embarking on legacy modernisation needs a modernisation process, with internal champions both in top business management and in the technical area, as well as effective risk mitigation policies.
Legacy modernisation, then, is just one more example of where developers must learn to step out of the "technology" silo and consider "business"; they must be able to make fact-based modernisation choices driven by business imperatives. They must also avoid being driven by technology or, worse, vendor marketing agendas. The choices facing them are:
- Full “Rip and replace” – throw away legacy systems and redevelop them from scratch. Expensive and wasteful; but can deliver a high quality system reflecting modern business practice (if the development is run properly and legacy domain experts are still available) and gets rid of expensive old-fashioned platforms. This may be the only option if legacy is undocumented and automated legacy analysis tools aren’t available.
- Use of advanced replacement technology – similar to “rip and replace” in effect but with lower risk through the use of modern technology (such as Erudine, for example; or 4GL frameworks such as Uniface from Compuware) to generate new systems efficiently. This assumes that domain experts are available; and/or that the legacy is well-structured enough to act as a “requirements spec”.
- The “SOA approach” – extract services from legacy and make them available to more modern applications. Assumes a degree of legacy quality (modular design and a “good practice” approach) – and there’s a chance that the legacy doesn’t work in an SOA environment.
- “Object Wrapping” – similar to the “SOA Approach” but at a lower level – wraps legacy as reusable objects. Assumes legacy quality and there’s an even bigger risk that the legacy wasn’t built to be used this way.
- Cross-platform development – basically, migrate legacy to a more cost-effective platform using advanced tools and either transfer production to the new platform; or maintain on the new platform and transfer updated systems back to the original platform. Assumes legacy quality and doesn’t work on (isn’t available for) every platform.
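To make the "SOA approach" and "object wrapping" options concrete, the sketch below shows the shape of a thin Java facade that exposes a legacy routine as a reusable object. All the names are hypothetical, and the call into COBOL is simulated with plain Java arithmetic; in a real system it would go through an interop layer (JNI, a vendor COBOL runtime, or a message bridge), marshalling parameters into the legacy program's linkage section.

```java
// Sketch of "object wrapping": a modern facade over a legacy calculation.
// Hypothetical names throughout; the legacy call is simulated here.
public class LegacyInterestService {

    // Stand-in for a call into a legacy COBOL routine computing simple
    // interest in whole pence. A real wrapper would delegate across an
    // interop boundary rather than reimplement the logic.
    private long callLegacyInterestCalc(long principalPence,
                                        int rateBasisPoints,
                                        int days) {
        return principalPence * rateBasisPoints * days / (10_000L * 365L);
    }

    // The object-oriented surface that new applications (or an SOA
    // service layer) consume, adding the input validation the legacy
    // routine may have left to its callers.
    public long interestInPence(long principalPence,
                                int rateBasisPoints,
                                int days) {
        if (principalPence < 0 || days < 0) {
            throw new IllegalArgumentException("negative inputs not supported");
        }
        return callLegacyInterestCalc(principalPence, rateBasisPoints, days);
    }

    public static void main(String[] args) {
        LegacyInterestService svc = new LegacyInterestService();
        // £10,000.00 at 5.00% for 365 days: prints 50000 (pence)
        System.out.println(svc.interestInPence(1_000_000L, 500, 365));
    }
}
```

The wrapper is deliberately thin: the business logic stays in the trusted legacy code, which is exactly the risk-mitigation point of this option – and also why it fails if the legacy wasn't built with callable, well-separated routines in the first place.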
And, in practice, developers may have to adopt aspects of all these approaches, depending on circumstances. It is now fashionable to recognise the quality of much of our IT legacy (if it is still in use after all these years it must be doing something right) but it is also possible to waste a lot of time trying to reclaim poorly documented and architected systems that just happen to work (as far as the business can tell).
So, the bottom line is that you should plan for legacy reclamation, considering all the options; but make sure that you have a contingency option for bailing out if reclamation really gets bogged down in bad legacy. And there’s another contingency to consider: if you find that your legacy has been getting the wrong answers for all these years, how will you manage the politics of suddenly getting the numbers right? Especially if you’ve been reporting the wrong numbers to the regulators.
So, perhaps the real bottom line is that legacy reclamation isn’t a second-class project for tired developers. It is an important part of your IT process and needs access to your best, brightest and most flexible brains.