Legacy systems that refuse to die¶
Europe runs on IT infrastructure that groans and creaks like an Ankh-Morpork tenement, held together with twine, optimism, and a few whispered curses.
Public services still limp along on twenty-year-old middleware, hospitals operate medical devices that cannot be patched without voiding the warranty, and railways and energy systems rely on vendors who disappeared long before GDPR was a twinkle in anyone’s eye.
Governments favour what they quaintly call “strategic delay,” pushing modernisation back year after year until the risk becomes existential. Then, as one might expect, the bill arrives with compound interest and a frown from whoever still believes in paperwork.
The real cost of “if it ain’t broke”¶
In the public sector, the phrase “it is still working” is perhaps the most expensive lie ever told. That Windows XP machine keeping the town’s electricity humming along functions perfectly, until it does not. And yes, such machines still exist.
The unpatched medical imaging system from 2003 carries on until ransomware decides to throw a midnight party in patient records. The procurement cycle often unfolds like a tragicomedy: the system is installed with a five-year support contract, the vendor offers extended support at a premium, then quietly walks away from updates altogether. For years the system survives on hope, duct tape, and whispered incantations, until a catastrophic failure finally forces an emergency replacement at ten times the planned cost.
Meanwhile, knowledge walks out the door. The COBOL wizard retires, the documentation sits in a filing cabinet that went to the skip years ago, and the vendor is gone, dissolved, or spirited away.
Why legacy won’t die quietly¶
Legacy systems do not exist in splendid isolation. Ancient databases feed dozens of other systems, many undocumented, so touching one often causes the whole structure to collapse like a house of cards in an Ankh-Morpork wind. Certification requirements make upgrades even trickier. Medical devices, aviation software, and nuclear systems are certified to run only on specific versions, and attempting to update one component often triggers a need to re-certify everything at enormous cost and over many years.
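To make the dependency problem concrete, here is a minimal sketch, with every system name invented, of how a simple graph walk reveals the blast radius of touching one legacy database. The map itself is the part most organisations do not have.

```python
from collections import deque

# Hypothetical dependency map: each key feeds the systems listed against it.
# In practice this map is exactly what is missing for most legacy estates.
FEEDS = {
    "citizen_registry_db": ["tax_portal", "benefits_engine", "address_lookup"],
    "benefits_engine": ["payment_batch", "fraud_checks"],
    "address_lookup": ["postal_export"],
    "tax_portal": [],
    "payment_batch": [],
    "fraud_checks": [],
    "postal_export": [],
}

def blast_radius(system: str) -> set[str]:
    """Return every system transitively affected if `system` is changed or fails."""
    affected, queue = set(), deque([system])
    while queue:
        current = queue.popleft()
        for downstream in FEEDS.get(current, []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

print(sorted(blast_radius("citizen_registry_db")))
# ['address_lookup', 'benefits_engine', 'fraud_checks',
#  'payment_batch', 'postal_export', 'tax_portal']
```

Even a toy map like this makes the point: change one node and half the estate turns out to be downstream of it.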
Vendor lock-in compounds the problem: proprietary formats, closed APIs, and custom protocols mean that if the vendor vanishes, the code remains locked and migration requires rebuilding from scratch, yet budgets rarely accommodate such adventures. Politicians fund glossy “digital transformation strategies” but not the mundane work of infrastructure upkeep, and risk aversion masquerading as prudence transforms “we cannot risk downtime” into “we will accept catastrophic failure instead,” gambling that today is not the day the house of cards falls.
Hidden vulnerabilities¶
Once a system reaches end of life, every newly discovered vulnerability stays exploitable indefinitely, because no patch is ever coming; attackers are handed low-hanging fruit. Compliance often becomes theatre: GDPR requires appropriate technical measures, yet organisations tick the box despite running Windows Server 2003, until the breach and the subsequent fines arrive.
Fragile systems create accidental insider threats: when only two people understand how the system works, they become both the single points of failure and prime targets for social engineering. Supply chains turn into archaeology projects, as legacy systems depend on vendors who no longer exist and hardware that has not been manufactured in fifteen years, leaving engineers to scavenge eBay for vital components. Hospitals really do this.
Real-world nightmares¶
The NHS runs Windows XP not from choice but because critical medical devices refuse to cooperate with anything else. Upgrading the OS would require device re-certification, which costs more than the hospital’s entire IT budget. The solution is to segment the network and hope ransomware remains polite.
Railways guide their trains with MS-DOS systems from the 1980s, supported by vendors bankrupt since 1997. Documentation is patchy, the original engineers are retired or dead, and the system cannot be switched off for testing without stopping every train for weeks.
Elsewhere, core citizen services still run on early-2000s middleware, maintained by a shrinking pool of grey-haired specialists who demand consultancy fees that would make barristers blush. Each bug fix takes months because nobody knows what else might break.
Energy grids rely on SCADA systems installed in 2001, networked in 2008, and never patched, running with cleartext protocols and default passwords. Vendors suggest replacements costing tens of millions; budgets are zero. Engineers buy firewalls, pray, and hope.
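As a rough illustration of what “cleartext protocols” means in practice, the sketch below checks a hypothetical address range on an isolated control-system VLAN for services that still answer on well-known unencrypted ports such as Telnet and Modbus/TCP. It is a plain TCP connect check, not an audit tool, and should only ever be pointed at networks you are responsible for.

```python
import socket

# Well-known ports for protocols that carry credentials or commands in cleartext.
CLEARTEXT_PORTS = {21: "ftp", 23: "telnet", 502: "modbus/tcp", 5900: "vnc"}

# Hypothetical address range for an isolated control-system VLAN.
HOSTS = [f"10.20.30.{i}" for i in range(1, 11)]

def open_cleartext_ports(host: str, timeout: float = 0.5) -> list[str]:
    """Return the cleartext services a host answers on (TCP connect check only)."""
    found = []
    for port, name in CLEARTEXT_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(name)
    return found

for host in HOSTS:
    exposed = open_cleartext_ports(host)
    if exposed:
        print(f"{host}: still speaking {', '.join(exposed)}")
```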
What should happen but does not¶
Ideally, every system would have a planned obsolescence date and be replaced when time runs out, with enforced documentation standards, mandatory sunset clauses in contracts, and public sector open source mandates ensuring taxpayer-funded infrastructure belongs to taxpayers. This would require procurement teams who understand technology, long-term budgeting, technically literate lawyers, regulators with teeth, and politicians with the courage to challenge incumbent vendors, rarely seen, even in Ankh-Morpork.
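What a planned obsolescence date might look like in practice is not complicated. The sketch below, with invented systems and dates, keeps a contractual sunset date and a realistic replacement lead time against each asset and flags anything that should already be on its way out.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Asset:
    name: str
    vendor: str
    support_ends: date           # contractual sunset date, not a hopeful guess
    replacement_lead: timedelta  # how long a replacement realistically takes

# Hypothetical register entries; the point is that the dates exist at all.
REGISTER = [
    Asset("imaging_archive", "MedicalVendorCo", date(2026, 3, 31), timedelta(days=730)),
    Asset("grid_scada", "DefunctControlsLtd", date(2019, 12, 31), timedelta(days=1095)),
    Asset("payroll_cobol", "InHouse", date(2028, 6, 30), timedelta(days=540)),
]

def replacement_report(today: date) -> None:
    """Flag assets running unsupported or overdue for a replacement project."""
    for a in REGISTER:
        start_by = a.support_ends - a.replacement_lead
        if today >= a.support_ends:
            print(f"{a.name}: support ended {a.support_ends}, running unsupported")
        elif today >= start_by:
            print(f"{a.name}: replacement should already be underway (start by {start_by})")

replacement_report(date.today())
```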
What actually happens¶
Systems run until failure, prompting emergency consultancy contracts at extortionate rates, temporary fixes that become permanent, blame for “unexpected” failures everyone foresaw, and promises of lessons learned that are never applied. Survivors obsessively document, maintain relationships with the remaining COBOL wizards, operate shadow IT, practise incident response, and keep their CVs current.
The generational knowledge gap¶
The crisis is human. Those who built the systems retire, replaced by staff versed in JavaScript and cloud platforms, not VAX assembly or AS/400 RPG. Knowledge dies unless documented, and even then documentation is rarely sufficient. Specialists who understand legacy systems can command extortionate fees, and perverse incentives encourage keeping obsolete systems alive indefinitely.
The existential questions¶
How much risk are organisations truly accepting? Not the risk in the PowerPoint deck, but the real risk. If a system fails tomorrow, what breaks, for how long, at what cost, and who suffers? Who is accountable when the inevitable happens: the CTO, the procurement team, the politicians? Nobody goes to prison. Perhaps they should. What is the trigger for action? Slow-motion disaster is insufficient. Does it take ransomware shutting down hospitals, a train crash, or pension payments failing before “we cannot afford to fix this” becomes “we cannot afford not to”?
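The “what breaks, for how long, at what cost” question can at least be put into numbers, however rough. A back-of-the-envelope sketch, with every figure invented for illustration, compares the expected annual loss of running to failure against the cost of a planned migration.

```python
# Back-of-the-envelope annualised risk for one legacy system.
# Every figure here is invented; the point is that the sum rarely gets done.
annual_failure_probability = 0.15   # chance of a serious outage in a given year
expected_outage_hours = 72          # emergency replacement is never quick
cost_per_hour = 40_000              # lost services, staff overtime, manual workarounds
emergency_premium = 10              # "ten times the planned cost", as above
planned_migration_cost = 2_000_000

expected_annual_loss = annual_failure_probability * expected_outage_hours * cost_per_hour
print(f"Expected annual loss from doing nothing: €{expected_annual_loss:,.0f}")
print(f"Emergency replacement if it fails: €{planned_migration_cost * emergency_premium:,.0f}")
print(f"Planned migration now: €{planned_migration_cost:,.0f}")
```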
Probably not any time soon¶
Legacy systems will probably not die until they damage something important enough to force change, or until the hardware, vendors, and people they depend on simply cannot be found any more. Until then, we are all passengers on infrastructure held together with twine, hope, and ever more expensive prayers.