Vendor lock-in and proprietary black boxes¶
Europe loves a good mystery. Preferably one involving a labyrinth, a dragon, and a contractual clause written during the Bronze Age. Unfortunately, what we actually have are vendor systems that behave like enchanted artefacts in a Pratchettian bazaar. They work only when the vendor priests are in a good mood, they cannot be opened without voiding several warranties, and they have a habit of exploding on a timetable known only to themselves.
From industrial control systems to cloud ecosystems, we are surrounded by magnificent machinery that we ostensibly own, yet somehow may not touch. You cannot see the code. You cannot adjust the configuration. You may only patch it with the vendor’s blessing, which tends to be bestowed as rarely as unicorn sightings. Vulnerabilities flourish, unknown and unfixed, and customers are expected to simply carry on, bravely clutching their risk register like a comfort blanket.
The gilded cage¶
You bought the system. You paid for it. It lives in your rack, consumes your electricity, and screams like a gremlin when it malfunctions. Yet if you try to prod it with a screwdriver or question its behaviour, the vendor materialises like an offended wizard and reminds you that you do not really own it. You merely rent the privilege of its presence.
The contract kindly informs you that you may use the software, but you may not peek inside it, reverse engineer it, modify it, integrate it with anything the vendor has not personally shaken hands with, or even think too loudly about how it works. Support costs extra. Updates cost extra. Security apparently costs extra, although no one will admit this in polite company.
In practice this means that when something breaks you file a ticket, then wait. The vendor may fix it. Or they may place it on their roadmap, which has the geological pace of a very shy glacier. An unaddressed vulnerability may linger long enough to qualify for a pension. If you complain you will be offered a new product line with a suspicious resemblance to your current one, only at twice the price and redesigned in a colour that no one asked for.
The black box problem¶
You cannot audit what you cannot see. Proprietary software is the digital equivalent of a sealed coffin that the vendor assures you is definitely empty and definitely safe, but which they absolutely will not let you open. For your protection, naturally.
You would like to know what the system collects. Where it sends it. Whether the encryption is modern or medieval. What happens when a process collapses in a heap. Who can access the logs, assuming it logs anything at all. When you ask, the vendor replies with practised serenity that everything is proprietary information, which is industry shorthand for we will not tell you because it is held together with brittle code and prayer beads.
Take medical devices. Hospitals install expensive imaging equipment that comes with known vulnerabilities. The vendor refuses to patch it because the device is certified in its current state and any changes would disrupt that certification. The hospital may run insecure software or abandon the device. They usually run it insecurely, isolated as best they can, while hoping the ransomware gangs are busy elsewhere. Eventually something gets through, because it always does. The vendor responds by recommending an upgrade, which just happens to cost more than a house in Surrey.
Industrial control systems, the ancient horrors¶
Deep in the bowels of Europe’s critical infrastructure lurk industrial control systems from the era when computers were large beige beasts that smelled faintly of ozone. These systems were never intended to be connected to networks, let alone the internet. Then someone decided that networking everything would be efficient and marvellous. No one updated the security. They merely attached TCP/IP like a sticky plaster to a creature that wanted no part of it.
The result is machinery that runs on long discontinued versions of Windows, communicates in proprietary dialects understood only by long retired engineers, and refuses to tolerate patches for fear of breaking its delicate constitution. The vendor chain has usually been acquired multiple times, leaving the software owned by a holding company that specialises in extracting value rather than fixing anything.
You cannot patch it. You cannot update it. You cannot audit it. You can only pay extortionate support fees and hope that the next global malware wave overlooks your power station.
When the vendor goes out of business the situation becomes farcical. A private equity firm buys the carcass, sacks the staff, and sells off the intellectual property to whoever fancies a bargain. Your only certified control system is now orphaned. Replacing it would cost millions. Continuing to run it is madness. You do both.
Cloud lock-in, the modern variant¶
Cloud platforms are wonderfully convenient. You can deploy servers in minutes, scale at will, and appear impressively modern at board meetings. The cost is that you eventually discover your entire infrastructure has become an AWS shaped religion.
Everything has a proprietary service, proprietary configuration, proprietary naming conventions, and proprietary ways to bill you for things you did not know you were using. If you embed yourself too deeply you will find that migration to another provider requires time, money, new skill sets, and several soothing beverages.
Cloud providers know exactly how sticky their ecosystems are. Free tiers lure you in. Platform specific features keep you there. One day the prices creep up and you discover that leaving would involve a migration so convoluted it should come with a Minotaur.
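The classic hedge against this stickiness is to put a thin interface of your own between application code and the provider SDK, so only the adapters know which vendor you married. A minimal sketch, with invented class names standing in for real provider adapters:

```python
from typing import Protocol

# One hedge against lock-in: application code depends on an interface you own,
# not on any one vendor's SDK. Everything here is invented for illustration;
# in reality the adapter classes would wrap S3, GCS, Azure Blob, and so on.

class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Stands in for a provider-specific adapter."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Business logic never imports a vendor SDK directly.
    store.put(f"reports/{name}", body)

store = InMemoryStore()
archive_report(store, "q3.pdf", b"%PDF-...")
print(store.get("reports/q3.pdf"))
```

Switching providers then means writing one new adapter rather than rewriting every call site. It does not make migration free, merely survivable.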
The same applies to SaaS. Organisations adopt Microsoft 365 because it is convenient. Years later prices rise. Alternatives exist but the cost of moving is astronomical. Export tools crawl at the speed of a tired slug. Integrations would need rewriting. Staff would need retraining. Procurement does the maths and quietly signs the renewal.
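Procurement's maths can be sketched in a few lines. The figures below are entirely hypothetical, but the shape is familiar: a rising renewal looks cheap next to a one-off migration bill, and the crossover lands comfortably beyond anyone's planning horizon.

```python
# Hypothetical numbers only: a sketch of why the renewal gets signed.
# None of these figures come from a real contract.

def cumulative_cost(years: int, annual_fee: float,
                    annual_increase: float = 0.0, one_off: float = 0.0) -> float:
    """Total spend over `years`, with the fee rising by `annual_increase` each year."""
    total = one_off
    fee = annual_fee
    for _ in range(years):
        total += fee
        fee *= 1 + annual_increase
    return total

# Staying: 100k a year, creeping up 8% annually.
stay = [cumulative_cost(y, annual_fee=100_000, annual_increase=0.08) for y in range(1, 11)]
# Leaving: 400k migration up front, then a cheaper 60k subscription.
move = [cumulative_cost(y, annual_fee=60_000, one_off=400_000) for y in range(1, 11)]

# Migration only pays for itself once the cumulative lines cross.
break_even = next((y for y, (s, m) in enumerate(zip(stay, move), start=1) if m < s), None)
print(f"Break-even in year {break_even}")  # → Break-even in year 7
```

Seven years is several budget cycles, two reorganisations, and at least one change of CIO away. The renewal wins.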
The configuration prison¶
Many proprietary systems trumpet their configurability, which usually means they allow cosmetic changes that have no bearing on the bits that actually matter.
You may modify interface colours and tweak a few workflow options. Anything related to security, architecture, performance, or meaningful customisation is locked behind vendor support contracts that cost more than a small library. Many options are undocumented relics of ancient development cycles, and you are strictly forbidden from touching them unless supervised by a vendor consultant armed with a day rate that could fund a minor expedition.
Security improvements tend to appear on roadmaps with optimistic delivery dates three years into the future. Auditors frown. Vendors shrug. Organisations apologise and move on because there is no alternative.
The patch hostage situation¶
Patches exist. Sometimes. To apply one you must perform ritual sacrifices in the form of licence upgrades, hardware replacements, mandatory consultancy, and extended maintenance windows. Each patch is an adventure that threatens to break all your integrations at once.
As a result many organisations put off patching until the next budget cycle. Which they miss. Which places them another cycle behind. Eventually a breach forces an upgrade anyway. The vendor profits from the support contract the whole time.
The geological update cycle¶
When researchers discover vulnerabilities the responsible vendors act with all the urgency of a sloth contemplating a crossword. A year may pass between discovery and patch. Attackers exploit the flaw within days. The entire security ecosystem collapses into farce.
Some vendors are even slower. ICS vendors release updates twice a year, no exceptions. If your vulnerability appears right after a scheduled update you can enjoy five months of existential dread. Medical device vendors often refuse updates entirely unless you buy the next model.
The audit impossibility¶
Regulators want proof that your systems are secure. Auditors therefore descend on your organisation asking for documentation that probably does not exist.
You cannot provide source code. You cannot provide architectural diagrams. You cannot run penetration tests without vendor permission. You cannot demonstrate encryption because no one will tell you what it is. The audit concludes that the system cannot be verified. You nod, file the report, and prepare for the same conversation next year.
The vendor bankruptcy scenario¶
When a vendor goes bankrupt you discover how truly alone you are. Support evaporates. Documentation becomes archaeological evidence. Replacement becomes urgent and painfully expensive.
Local councils know this story well. One such council found its entire planning system orphaned when the vendor collapsed. They limped along with unsupported software on unsupported servers while scraping together a budget for a replacement. Nothing catastrophic occurred, which they interpreted as success. Realistically it was blind luck.
The integration nightmare¶
Modern organisations have dozens of systems that must cooperate. Proprietary protocols get in the way. Integration tools cost extra. Middleware breaks whenever anyone sneezes.
Each vendor offers its own integration apparatus that works only within its ecosystem. The result is a nesting doll of incompatible tools all shouting at one another through translation layers, none of which are documented, all of which are fragile.
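The nesting doll, reduced to its smallest form, looks something like this. Both vendor payload formats below are invented for illustration; real middleware is this pattern repeated dozens of times, undocumented, in production:

```python
# A minimal sketch of a translation layer between two invented vendor formats.
# Each vendor has its own nesting, its own field names, its own casing rules.

def from_vendor_a(payload: dict) -> dict:
    # Vendor A nests everything under "Envelope" and spells the ID "RecordNo".
    body = payload["Envelope"]["Body"]
    return {"id": body["RecordNo"], "status": body["StatusCd"].lower()}

def to_vendor_b(record: dict) -> dict:
    # Vendor B wants flat keys, shouted statuses, and its own field names.
    return {"record_id": record["id"], "STATE": record["status"].upper()}

a_message = {"Envelope": {"Body": {"RecordNo": "INV-001", "StatusCd": "OPEN"}}}
print(to_vendor_b(from_vendor_a(a_message)))
# → {'record_id': 'INV-001', 'STATE': 'OPEN'}
```

Every such adapter is a point of fragility: when Vendor A renames a field in a minor release, the sneeze propagates.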
Open standards would solve this. Vendors prefer profit.
The known vulnerabilities that stay unfixed¶
Public CVE databases are full of issues that vendors have chosen not to fix. Sometimes the affected product is end of life. Sometimes the vendor pretends the issue is a feature. Sometimes they simply ignore it.
Organisations document these risks and soldier on because migration is impossible. They add compensating controls that would not satisfy a determined toddler, let alone a motivated attacker. And they hope.
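The risk register itself is not complicated; what it records is merely depressing. A sketch of tracking exposure for vulnerabilities the vendor will not fix, using invented records and field names rather than the real CVE/NVD schema:

```python
from datetime import date

# Invented records for illustration; not the real CVE/NVD schema.
unfixed = [
    {"cve": "CVE-2019-0001", "product": "LegacySCADA 4.2",
     "published": date(2019, 3, 1), "vendor_response": "end of life"},
    {"cve": "CVE-2022-0002", "product": "ImagingStation 9",
     "published": date(2022, 6, 15), "vendor_response": "works as designed"},
]

def years_exposed(record: dict, today: date = date(2025, 1, 1)) -> float:
    """How long the flaw has been public with no fix forthcoming."""
    return (today - record["published"]).days / 365.25

for r in unfixed:
    print(f'{r["cve"]}: {years_exposed(r):.1f} years unpatched ({r["vendor_response"]})')
```

The output is a list of known holes, each with an age and an excuse. Auditors receive it annually. Nothing changes.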
The unknown vulnerabilities that stay unfindable¶
Buried inside every proprietary system are bugs waiting to blossom into future headlines. No one can find them because no one is allowed to audit the code. Vendors forbid independent testing. Automated tools cannot interpret proprietary protocols. In the end the attackers find them first.
You learn of the flaw only after your data has left the building. The vendor expresses shock and pledges to do better. They often do not.
The open source alternative nobody chooses¶
Open source solutions exist. They are transparent, auditable, modifiable, and often excel in security. Yet many organisations recoil because there is no corporate scapegoat to sue, no glossy brochure, no reassuring branding. Procurement teams fear responsibility more than risk.
Ironically many proprietary systems contain vast amounts of open source software hidden under licensing restrictions. Customers pay for the privilege of ignorance.
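This is rarely well hidden. A crude sketch of how embedded open source is often spotted: the version strings left sitting inside a "proprietary" binary. The blob and product name below are invented; real SBOM scanners go considerably further than a regex, but the principle is the same.

```python
import re

# Invented binary blob for illustration: a "proprietary" product with
# open source version strings plainly visible inside it.
blob = b"\x00\x01ACME FlowMaster Pro\x00zlib 1.2.11\x00OpenSSL 1.0.2k\x00\x02"

# Look for a few well-known library names followed by a version string.
signature = re.compile(rb"(zlib|OpenSSL|libpng|curl)[ /]?([\d.]+\w*)")
found = [(m.group(1).decode(), m.group(2).decode())
         for m in signature.finditer(blob)]
print(found)
# → [('zlib', '1.2.11'), ('OpenSSL', '1.0.2k')]
```

Two well-known open source libraries, at versions with well-known vulnerabilities, inside a product you are contractually forbidden from inspecting. The licence fees remain due.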
The regulatory gap¶
What does this look like in practice? Regulators speak in noble generalities about appropriate technical measures. Vendors exploit every gap in the law that does not explicitly require them to be transparent, patch quickly, or avoid lock-in. They comply with the letter of the law while trampling its intent.
Regulators nod gravely. Vendors insist their software is secure. Organisations repeat the assurance because no alternative exists. Attackers, meanwhile, get on with the job.