Do you (or your suppliers) have any systems that don't yet support dates beyond 1999? Do you have any business risks or opportunities for the millennium that may create new demands on your systems?
Would it help you manage your systems through the Year 2000 problem if you could chunk your legacy applications and data stores into separate subsystems with clean interfaces?
If you had a reliable way of managing the interfaces and data flows between compliant and non-compliant subsystems, would you be able to test and implement millennium-compliant programs and data stores in a step-by-step manner? Wouldn’t this make the management and control of Year 2000 conversion projects easier? Wouldn’t this give you a good basis for any future conversion (such as Euro)?
Do you realise that what we've just described is an example of Component-Based Development? Now read on ...
Legacy code is often compared to spaghetti, but lovers of Italian food will recognize that pizza provides a much better analogy. You try to cut out a wedge of pizza, but it remains connected to the rest of the pizza by innumerable strands of elastic cheese.
In conversion projects such as Year 2000 or Euro, the task is to get the entire legacy portfolio compliant with a specific new requirement, such as four-digit dates or a change of currency. The challenge is to break this task into manageable chunks.
A chunk of legacy system makes a component. Each chunk needs to be tested and implemented separately. Clean interfaces need to be defined between the chunks, so that converted chunks can interoperate with unconverted chunks. Management will be reassured when they see a growing number of converted chunks successfully tested and incorporated into production systems.
As each chunk of legacy system is converted, it is provided with two parallel interfaces: one for communicating with unconverted chunks, and one for communicating with converted chunks. This is achieved with 'intelligent' bridging components.
The alternative is a very high-risk strategy: big bang conversion, where the entire legacy portfolio is converted from non-compliance to compliance in a single overnight integration test and implementation.
Therefore, the only low-risk strategy for Year 2000 and Euro conversion projects is something like component-based development (CBD).
Many people regard Year 2000 conversion as simply a programming task: to find all the date references in the code and fix them. They argue that any higher level analysis is both a distraction (given the urgency of the task) and an impossibility (given the poor structure of the systems and the unreliability, incomprehensibility or sheer absence of documentation).
But most people would accept the need for some form of testing before the altered programs are returned to the production environment. This testing needs to include system/integration testing (does this program still work in conjunction with other programs?) as well as unit testing.
But here's the difficulty. If you don't know what a system is supposed to do, you can’t test it. The ability to conduct a meaningful test implies a specification of the system. And if you don't know what separate subsystems are supposed to do, you can only test the whole system as one large lump. In such circumstances, debugging is a hit-and-miss affair, as likely to add bugs as to remove them.
Therefore, if your legacy systems are poorly structured and lack good documentation, some form of analysis is all the more necessary. The analysis we recommend concentrates on creating a relatively small number of large components with a small number of access points into each one.
In an earlier section, we compared legacy code with pizza. Some software engineers prefer to compare legacy code with lasagne. What were once separate layers of cheese, pasta and other ingredients have now been melted into a solid mass.
When we look at the legacy systems of a large company, we are usually presented with what appears to be a list of application systems. But this list can be misleading. The items in the list refer to the original system development projects - the layers of the lasagne. The systems have usually grown and changed, in functionality and architecture, often to the point where the original names no longer seem appropriate. New data stores, or interfaces to remote data stores, may have been added ad hoc.
Even if the applications were originally designed according to a well-thought-out architecture, with maximum cohesion and minimum coupling, evolution of the applications over time may have greatly reduced the cohesion within each application and increased the coupling between applications.
So the segmentation of legacy code may not follow the apparent boundaries between the legacy applications. We need to identify ways of carving up the code with maximum cohesion and minimum coupling. Sometimes you can improve the situation by reducing connectivity between subsystems.
The idea is to convert an interlocking portfolio into a manageable set of segments connected by defined interfaces with bridges at each crossing point.
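The trade-off can be made concrete. As a toy sketch (the call graph and the candidate segmentations below are invented for illustration), one way to score a proposed carve-up is simply to count the calls that cross segment boundaries; fewer crossings means fewer bridges to build and maintain:

```python
# Illustrative only: modules A-E and their call graph are made up.
# Coupling = number of calls that cross a segment boundary.
calls = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "D")]

def coupling(segments):
    """Count cross-segment calls for a proposed segmentation."""
    seg_of = {m: i for i, seg in enumerate(segments) for m in seg}
    return sum(1 for a, b in calls if seg_of[a] != seg_of[b])

# Two candidate carve-ups of the same portfolio:
print(coupling([{"A", "B", "C"}, {"D", "E"}]))   # 1 cross-segment call
print(coupling([{"A", "B"}, {"C", "D", "E"}]))   # 2 cross-segment calls
```

The first segmentation needs only one bridge; the second needs two. Real tools apply the same principle to call graphs with thousands of nodes.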
Component-Based Development offers a radically new approach to the design, construction, implementation and evolution of software applications. Software applications are assembled from components from a variety of sources; the components themselves may be written in several different programming languages and run on several different platforms.
Each component communicates with other components through clearly specified interfaces. A component provides services, which may be used by other components, or by users. Components may be reconfigured, replaced or reprogrammed, as long as they continue to provide the same services to the same level of quality. This gives considerable flexibility to the systems architect, to evolve the application portfolio while maintaining and improving levels of service to the user.
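In modern terms, the contract idea can be sketched like this (an illustrative Python sketch; the interface and class names are our own invention): client code depends only on the specified interface, so the component behind it can be replaced without the client changing.

```python
from abc import ABC, abstractmethod

class CustomerLookup(ABC):
    """The specified interface: clients depend only on this contract."""
    @abstractmethod
    def find(self, customer_id: str) -> dict: ...

class MainframeLookup(CustomerLookup):
    """Hypothetical legacy-backed implementation."""
    def find(self, customer_id):
        return {"id": customer_id, "source": "mainframe"}

class ReplacementLookup(CustomerLookup):
    """Hypothetical replacement providing the same service."""
    def find(self, customer_id):
        return {"id": customer_id, "source": "new-server"}

def lookup_source(lookup: CustomerLookup, cid: str) -> str:
    # Client code: unchanged when the component behind the interface
    # is reconfigured or replaced.
    return lookup.find(cid)["source"]

print(lookup_source(MainframeLookup(), "C42"))
print(lookup_source(ReplacementLookup(), "C42"))
```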
At first sight, Component-Based Development might seem to be little more than a fashionable new label for some traditional software ideas: modular programming and subroutine libraries. Even in the 1960s, these ideas promised high levels of software reuse (although this was rarely achieved). But to the extent that CBD is a genuine innovation, this is to be found in its approach to legacy systems: some of the most significant potential cost-savings associated with CBD involve extracting (or ‘mining’) components from existing code. It is perhaps this element of CBD that arouses the greatest scepticism, and offers the greatest potential rewards.
This means that subsystems of existing legacy systems can be wrapped to form components.
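A wrapper is essentially an adapter: new code that presents a clean, specified interface while delegating to the old routine. A minimal sketch, with a stand-in legacy routine and hypothetical names (a real wrapper would typically cross language and platform boundaries):

```python
def legacy_calc_premium(yy, amount_pence):
    # Stand-in for an existing routine that expects a 2-digit year
    # and an integer amount in pence (simulated here in Python).
    return amount_pence * (110 if yy >= 60 else 105) // 100

class PremiumComponent:
    """Component wrapper: clean interface, 4-digit years, decimal currency."""
    def calc_premium(self, year: int, amount: float) -> float:
        yy = year % 100                  # adapt to the legacy formats...
        pence = round(amount * 100)
        result = legacy_calc_premium(yy, pence)
        return result / 100              # ...and back to the clean interface

print(PremiumComponent().calc_premium(1997, 10.00))
```

Callers see only the clean interface; when the legacy routine is eventually converted or replaced, only the wrapper's internals change.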
The segmentation into components and controlled piecemeal conversion applies to the database as well as to the application programs. If you followed the technological imperatives of the 1980s and consolidated all your data into a single mainframe database, you may now need to consider dividing the data storage into smaller, more manageable chunks. Fortunately, client/server technologies now exist to make this possible.
In some companies, you may find this has already been done. Although the logical data model is a single flat entity-relationship diagram, the database designers have installed database firebreaks to make it easier to carry out database housekeeping tasks. Furthermore, new applications built using client/server technology may already have been designed with distributed data storage.
Our method for system conversion is in four overlapping (and sometimes iterative) parts.
Read the table clockwise from top left. We start by assessing the current situation, and end by assessing what we’ve achieved (and what remains to be achieved).
Note: we use UML-style diagrams to represent the legacy systems as a set of collaborating subsystems.
This may mean writing a small amount of new code to combine several flows, transactions and disparate databases into a single flow, transaction or database.
When we have converted a portion of legacy system, it communicates with other portions of legacy system exclusively through interface bridges. We use an 'intelligent' bridge, which selects the appropriate interface and performs conversion where necessary. The intelligent bridges also connect converted and non-converted data stores.
For Year 2000 conversions, the bridge will convert 2-digit years into 4-digit years where necessary, using your selected windowing technique, and will truncate 4-digit years where necessary for communication with subsystems that have not yet been converted. For Euro conversions, the bridge will convert local currency transactions into dual currency transactions and/or Euro-only transactions, and back again, as required.
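Both translations can be sketched in a few lines. In this illustration the function names and the windowing pivot of 60 are our own assumptions, not a specification of any particular bridge, and the Deutschmark figure is used purely as an example of a fixed conversion rate:

```python
PIVOT = 60  # assumed fixed-window pivot: 60-99 -> 1900s, 00-59 -> 2000s

def expand_year(yy: int) -> int:
    """Widen a 2-digit year for a converted (4-digit) subsystem."""
    return 1900 + yy if yy >= PIVOT else 2000 + yy

def truncate_year(yyyy: int) -> int:
    """Narrow a 4-digit year for a not-yet-converted subsystem (lossy)."""
    return yyyy % 100

DEM_PER_EUR = 1.95583  # example fixed rate: Deutschmarks per euro

def dem_to_eur(amount: float) -> float:
    """Convert a legacy-currency amount at the fixed rate, rounded to cents."""
    return round(amount / DEM_PER_EUR, 2)

print(expand_year(99), expand_year(5))   # 1999 2005
print(truncate_year(2005))               # 5
print(dem_to_eur(100.00))                # 51.13
```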
Particular attention has to be given to situations where the data are being retrieved or sorted by date or financial amount. The bridge needs to handle data retrieval and sorting, as well as simple data flow. Bridges may also need to deal with data compression algorithms and variable-length records.
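Sorting is the subtle case: a naive sort on raw 2-digit years puts year 00 before year 99. A sketch of the fix (the record layout here is hypothetical) is to expand the year inside the sort key rather than in the stored data:

```python
def sort_by_date(records, pivot=60):
    """Sort records carrying (yy, mm, dd) dates correctly across 1999/2000.

    A naive sort on the raw 2-digit year would order year 00 before year 99.
    """
    def key(rec):
        yy, mm, dd = rec["date"]
        yyyy = 1900 + yy if yy >= pivot else 2000 + yy  # expand only for comparison
        return (yyyy, mm, dd)
    return sorted(records, key=key)

recs = [{"date": (0, 1, 15)}, {"date": (99, 12, 31)}]
print(sort_by_date(recs))  # the 1999 record now sorts before the 2000 one
```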
This bridge has to be designed to reflect your technical architecture. For a template requirements specification of this bridge, see below.
CBD tools provide several aspects of support for this solution:
One organization that has successfully implemented the intelligent bridging approach is Healthsource-Provident Administrators Inc. (HPA), a $2.3 billion health insurance enterprise with 2,100 employees.
HPA resulted from a 1995 merger between Healthsource Inc. and Provident Life & Accident Insurance Company (PLAIC). The merged company inherited over 12,000 COBOL programs, 6,000 data files, and 303 shared databases, using a variety of data access methods (IMS, IDMS, DB2 and VSAM). Recent project experience suggested that it would take six months to convert each file using normal methods. (At this rate, the company would just about be ready for the sixth millennium.)
Furthermore, the 303 databases were shared across four distinct business organisations: Claims, Management, Reporting, and Finance. In most cases, it was discovered that databases created and maintained by one group were being used by two or more other business organisations, resulting in additional complexity and interdependency.
Faced with the impossibility of a normal conversion, HPA reviewed the available techniques for short-circuiting the conversion and decided upon the implementation of intelligent bridges.
"Having recently completed a conversion requiring six months for a single file, we knew that nothing short of a major breakthrough would allow us to convert 6,000 files in eighteen months. Reality struck when we asked, "Can our organization convert a file, and all the programs that use it, in 30 minutes, or 80 per week?" Clearly, the answer was, "No." We were, in effect, looking into the eyes of business failure. Literally, we were confronted with the realization that our systems, our company, and our personal finances were all at great risk . . . For us, [intelligent bridging] made survival possible." Paul D. McMahan, chief information officer of HPA
Following the success of this approach in HPA, a spin-off company called Bridging Data Technology Inc. has been formed to market intelligent bridgeware to other companies, under the trade name SmartBridge™.
[For more information on this case study, see Chapter 4.5 of "Year 2000 Problem: Strategies and Solutions from the Fortune 100" by Leon Kappelman, Ph.D., with the SIM Year 2000 Working Group, and Friends, published 1997 by International Thomson Computer Press. Or visit the Bridging Data Technology website at http://www.bridging.com]
|Project Management||The conversion task is divided into separate chunks. Conversion, testing and installation can be more easily scheduled and controlled. Progress with the conversion is more easily visible to management.|
|Risk Management||Installation of defined portions of converted system reduces the risk of a big-bang installation at the end of the conversion. If problems do arise in production, only the affected components need to be backed out; the remaining components will revert to using the old interfaces with the backed-out components.|
|Change Management||During a conversion project, there are always other changes in requirements. Some conversion projects simply veto these, or try to postpone them all until the conversion is complete. This is unrealistic, although it provides a convenient excuse for slippage in the conversion schedule. In many organizations it will be impossible to resist changes to the systems in parallel with the conversion. The longer the converted code sits waiting to be implemented, the more likely it is to get out of step with the production code. When the time finally comes to put the converted code into production, the chances are that this installation will inadvertently remove system enhancements that were carried out for urgent business reasons since the code was converted. But our approach puts the converted code into production as soon as it is ready. There is then no reason to prevent other enhancements to the converted code, provided that proper disciplines and controls are in place.|
|Future System Maintenance||At the end of the conversion project, the legacy systems are in much better shape for future maintenance and enhancement. If reengineering and replacement of legacy systems is still desired, this can now be undertaken in a controlled manner, without the artificial urgency of Year 2000 conversion.|
Major conversion projects involve a high business risk. The approach described here may help you reduce this risk, but only if you do things properly. This document does not imply the acceptance of any responsibility for projects attempting to follow this approach.
This material has been developed in association with Ade Bamigboye of Supernova, Michael Mills of Kamm Associates and Steve Mills (no relation) of M-Consulting.
The following list of requirements is based on the work of Bridging Data Technology.