Year 2000: Worse than it Seems

Martyn Thomas

Chairman Emeritus, Praxis Critical Systems
Former Global Director, Year 2000 Services
Deloitte and Touche Consulting Group

© Martyn Thomas 1998. This article appeared in Parliamentary Brief, Spring 1998. Reprinted by permission.

Most directors and managers have heard about the "millennium time-bomb", but many of them still believe that it is only a problem for older data-processing systems. They are wrong. Year 2000 problems span the entire business, from the factory floor to the executive washroom and from the supply chain to product liability. For most companies, resolving their year 2000 issues will either be the largest project they have ever undertaken, or the last.

It is true that year 2000 affects data processing systems, many of which will be older mainframe systems, but this is only the start of the problem. Newer client-server systems are likely to fail. Spreadsheets are likely to fail. Desktop systems of all sorts will need to be checked if they are important to the business and if they are sensitive, in any way at all, to the calendar. But it is the hidden computer systems that represent the greatest problems in most organisations.

In thirty years, microprocessors have penetrated most areas of business life. There are stock control systems, automated assembly lines, lift controllers, time locks, alarms, instruments, credit card readers and security systems. All these and more need to be checked. There is embedded logic in many products that needs to be checked too.

Year 2000 failures are already occurring. A retail chain had problems when a consignment of corned beef arrived with a shelf life that extended beyond 2000 (because a product that expires in "02" seems well out of date when the current year is "97"); they had to replace the stock-control system. A pharmaceutical company that packs medicines for dispatch in order of sell-by date found that all "2000" products were dispatched ahead of all "1999" products (because "00" is less than "99"). Sites using 999-day expiry dates on backup tapes experienced failures on 7 April 1997, which is 999 days before 1 January 2000. Critical hospital equipment and automated assembly lines have failed when tested with next-century dates. Some time locks are known to fail, and to leave bank vaults locked for up to 80 years. The rate of failures will increase as time passes until, on widely quoted Gartner Group estimates, 80% of date-sensitive systems fail by 1999.
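The comparison failures described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and record layout are invented, not taken from any of the systems mentioned): comparing two-digit years directly makes a product in date until 2002 look long-expired in 1997, and sorts year-2000 stock ahead of 1999 stock.

```python
# Hypothetical sketch of the two-digit-year faults described above.

def is_expired(expiry_yy: str, today_yy: str) -> bool:
    # Buggy check: compares two-digit years as strings.
    return expiry_yy < today_yy

# Corned beef in date until "02" is rejected as expired in "97":
print(is_expired("02", "97"))   # True - wrongly flagged as expired
print(is_expired("98", "97"))   # False - next year's stock looks fine

# Medicines packed in order of two-digit sell-by year:
batches = ["99", "00", "98"]
print(sorted(batches))          # ['00', '98', '99'] - 2000 stock ships first
```

The same misordering occurs whether the years are compared as strings or as integers; the fault is in the two-digit representation, not in any particular language.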

Some failures may be happening unnoticed, because the particular fault is causing incorrect data to be stored silently, creating problems that will come to light later. Programs that use "99" in the date field as an end-of-list marker are a good example of this class of fault; the first genuine 1999 year will stop the processing and all subsequent records will be ignored. This may not be obvious at first, and may trigger other incorrect processing—for example, if goods inwards records are ignored, components may be re-ordered from suppliers unnecessarily.
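The end-of-list fault described above can be shown with a small, entirely hypothetical record set: a program that reserved year "99" as its sentinel stops at the first genuine 1999 record and silently ignores everything after it.

```python
# Sketch of the "99"-as-sentinel fault. The records are invented
# goods-inwards entries of the form (item, two-digit year).

records = [
    ("widget", "97"),
    ("sprocket", "98"),
    ("gasket", "99"),   # a real 1999 record...
    ("flange", "99"),   # ...and everything after it is never seen
]

processed = []
for item, yy in records:
    if yy == "99":      # buggy: "99" was reserved as an end-of-list marker
        break
    processed.append(item)

print(processed)        # ['widget', 'sprocket'] - 1999 records silently dropped
```

Nothing crashes, which is exactly why this class of fault can go unnoticed until the missing records trigger incorrect downstream processing such as unnecessary re-ordering.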

Time is short and it may already be too late to implement the best solutions. Even in the simple cases—for example, an accounts system with no look-ahead that will first fail in January 2000—a prudent company will want to avoid processing a century-end with modified software that has never successfully processed a normal year-end. That decision means that the software must be working by January 1999. If the company wants to process a quarter-end before risking a year-end, the date moves back to October 1998. A typical project needs 50% of the elapsed time for testing, so the new system needs to be ready for testing around Christmas 1997! The work will need extra resources, too: staff with appropriate skills and extra hardware for the development, testing and parallel running (potentially of all the company’s critical systems simultaneously). The cost of these resources is going up; reports from America suggest that contract rates for COBOL programmers have doubled in the past four months. Just when more staff are needed, the permanent team may be tempted to leave.
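The backward scheduling in the paragraph above can be made concrete with illustrative dates. The project start date and the even split of elapsed time are assumptions for the sketch, not figures from any particular company.

```python
# Rough sketch of the deadline arithmetic: run a quarter-end before a
# year-end before the century-end, and reserve roughly half the elapsed
# time for testing. All dates are illustrative assumptions.

from datetime import date

century_end   = date(2000, 1, 1)
year_end      = date(1999, 1, 1)    # must process a normal year-end first
quarter_end   = date(1998, 10, 1)   # and a quarter-end before that

project_start = date(1997, 1, 1)    # assumed start of remediation work
elapsed       = quarter_end - project_start
testing       = elapsed / 2         # ~50% of elapsed time for testing
code_complete = quarter_end - testing

print(code_complete)                # 1997-11-16: the code must be
                                    # finished and in test by late 1997
```

Under these assumptions the software must be code-complete in late 1997, which is why the article says the best solutions may already be out of reach.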

The view from the boardroom is gloomy: the problem is larger than it seems, very expensive, urgent and, worst of all for the Board, raises issues of liability to shareholders, customers and suppliers if anything goes wrong.

The scale of the problem

In technical terms, these problems are trivial and can be fixed cheaply. The impact of year 2000 comes from the pervasiveness of the computer systems that need to be checked and corrected, and from the poor professional practices followed by most software developers, which mean that the up-to-date program source for important systems often cannot be located, or that test suites are incomplete. In the case of electronic equipment, there may be no one in the organisation who has acknowledged responsibility for checking that it will continue to work; the manufacturer may offer no support, or be untraceable.
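As an example of how technically trivial the fix can be: one widely used repair in year 2000 work is "windowing", which interprets a two-digit year relative to a pivot so that values below the pivot belong to 20xx and values at or above it to 19xx. The pivot of 50 here is an illustrative assumption; real projects chose the pivot to suit the data.

```python
# Minimal windowing sketch. The pivot value is an assumption for
# illustration, not a universal standard.

PIVOT = 50

def expand_year(yy: int) -> int:
    """Map a two-digit year to a four-digit year using a fixed window."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(97))  # 1997
print(expand_year(2))   # 2002
print(expand_year(0))   # 2000
```

The change itself is a few lines; the expense lies in finding every date comparison in every system, which is the point the article makes.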

The problem is exacerbated by several factors:

For the first time, many computer systems that are installed and working well may all fail simultaneously.

It is a common point of failure for all customers and suppliers.

The deadline for correcting the systems is immovable and shared with the whole world.

The resources to solve the problem are inadequate and needed by many organisations simultaneously.

Testing will typically take 50% of the elapsed time, and will need scarce human and hardware resources.

The scale of changes will be large: typically 70% of systems need changing and 6% of all lines of code need to be modified. Changes on this scale will introduce other errors.

What should be done?

The first step is to create awareness of the scale and criticality of the issue, and to appoint someone, reporting to the Board, to drive an organisation-wide programme. Their first action should be to issue firm instructions that ensure that no further systems are acquired unless they are guaranteed to work up to, during and beyond 2000.

The next step is to create an inventory of computer-based systems and equipment, and to identify for each system its criticality to the business and the cost, timescale and appropriate strategy for ensuring that it is, or becomes, millennium compliant.

The order of magnitude costs should be reported to the Board, who should allocate resources and, if necessary, stop other competing programmes.

Then the plans can be developed in more detail and implemented, allowing plenty of time for testing.

Most importantly, organisations should develop contingency plans because there can be no guarantees that all the problems will have been found and fixed in time.

What will it cost?

The main banks are forecasting £50m to £100m each, based on the inventories they have developed. BT is forecasting £350m. The Automobile Association published an estimate of 61 years of effort. The total estimate for the UK varies from £10 billion (which seems to ignore the electronic equipment costs) to £31 billion (an estimate by Robin Guenier, who leads the DTI TaskForce 2000). If only £10 billion is spent, it will be completely inadequate, and the costs of failing to address the problem properly will greatly exceed this figure.

What is likely to happen?

Many companies will not have checked or finished correcting their systems in time. These companies will suffer disruption and commercial damage. Some of them will probably be driven out of business.

Some companies will suffer sooner, either because of actual failures or because they cannot convince their auditors, the City, their insurers, their customers or their banks that they have tackled the problem effectively.

Some companies will be damaged by their responsibilities to their customers. Outsourcing companies are at serious risk if their contracts do not exclude updates to cope with 2000; product suppliers and software houses are also very vulnerable following the decision in St Albans v ICL. Most law firms already have Year 2000 litigation teams.

The national and global impact is unclear, and predictions can only be opinions. The world has never faced a problem of this sort before. Let us hope that most organisations act in time, that the consequences are not too severe, and that we learn the lesson about better software engineering in the future.


Martyn Thomas is Chairman Emeritus of Praxis Critical Systems Ltd. He can be contacted at: