What could actually go wrong
The Patrician has learned through decades of governance that the most dangerous risks are not the dramatic catastrophes that everyone discusses at length but the mundane failures that nobody considered worth preventing until they occurred. The city’s sewers did not fail through dragon attack or magical mishap but through decades of deferred maintenance and optimistic assumptions about load-bearing capacity. The banking crisis of ’87 was not caused by goblins or foreign invasion but by several people discovering that a system everyone assumed was robust had been held together by convention and collective wilful ignorance.
When contemplating what could go wrong with current technology trajectories, the temptation is to imagine science fiction scenarios involving rogue AI or technological singularities, which make compelling film plots and poor risk assessments. The realistic failure modes are considerably less cinematic and substantially more probable.
The Patrician suggests asking not “what is the worst thing that could happen” but rather “what are the boring ways this could fail that everyone is currently ignoring because they are busy worrying about exciting problems or busy being optimistic about success?” This produces a different list of concerns, and probably a more useful one.
The boring infrastructure failures
Cloud provider outages occur regularly and will continue occurring because perfect reliability is impossible at scale. Most outages are brief and affect limited regions. The concerning scenario is a major outage affecting multiple regions simultaneously through cascading failures, which becomes more plausible as systems grow more complex and interdependent. When this occurs, the enormous number of services depending on a single provider will fail simultaneously. Everyone will discover their disaster recovery plans were optimistic about recovery time.
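The cascading pattern above can be sketched as a toy dependency graph, with entirely invented service names: when a shared provider fails, everything transitively downstream fails with it, which is why a single-provider outage rarely stays a single-provider problem.

```python
from collections import deque

# Hypothetical dependency graph: service -> services that depend on it.
dependents = {
    "cloud-provider": ["auth-service", "storage-api"],
    "auth-service": ["web-app", "mobile-api"],
    "storage-api": ["web-app", "analytics"],
    "web-app": [],
    "mobile-api": [],
    "analytics": [],
}

def cascade(failed_root):
    """Return every service that fails, transitively, when one root fails."""
    failed = {failed_root}
    queue = deque([failed_root])
    while queue:
        for dep in dependents[queue.popleft()]:
            if dep not in failed:
                failed.add(dep)
                queue.append(dep)
    return failed

# Losing one storage API takes out two services; losing the provider takes out all six.
print(sorted(cascade("storage-api")))
print(sorted(cascade("cloud-provider")))
```

The breadth-first traversal is the whole point: nothing in the downstream services is broken, yet all of them are down, which is what the disaster recovery plans tend to be optimistic about.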
The semiconductor supply chain concentrated in Taiwan represents a geopolitical risk that everyone acknowledges and nobody has moved quickly to fix, because the fix requires building competitive fabrication capacity elsewhere, which costs tens of billions of euros and takes years. Everyone is aware of this. The pace of mitigation is constrained by the fact that the fix is expensive, slow, and, until the problem actually materialises, a worse bargain than the problem it addresses.
Subsea cables carry most intercontinental internet traffic. Occasional breaks are repaired routinely. Multiple cables failing simultaneously through accident, sabotage, or natural disaster would fragment the global internet for weeks. Global commerce has developed dependence on international connectivity that we have become accustomed to treating as permanent. It is not permanent. It is infrastructure.
The Patrician notes that infrastructure failures are inevitable and the question is not whether they will occur but whether preparation is adequate. His assessment is that it generally is not, because preparation for unlikely events competes poorly against immediate business pressures.
The economic disappointments
The AI business model problem is that many applications are popular but unprofitable because inference costs exceed revenue per user. This is sustainable temporarily through venture funding. Eventually it requires either dramatically improved efficiency, successfully charging users more, or accepting that specific applications are not economically viable at current technology costs. Many current AI applications will end up in the last category. They will disappear despite being technically impressive and widely used.
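The unit-economics problem can be made concrete with a back-of-envelope calculation, using entirely hypothetical numbers: when cost per query times queries per user exceeds revenue per user, growth makes the losses larger, not smaller.

```python
def monthly_margin(users, revenue_per_user, queries_per_user, cost_per_query):
    """Gross margin per month; negative means every new user deepens the loss."""
    revenue = users * revenue_per_user
    inference_cost = users * queries_per_user * cost_per_query
    return revenue - inference_cost

# Hypothetical: 20 EUR/month subscription, 1,000 queries per user,
# 0.03 EUR inference cost per query.
print(monthly_margin(100_000, 20.0, 1_000, 0.03))  # negative: costs exceed revenue
```

With these invented figures the service loses money on every user, and adding users scales the loss linearly, which is why "dramatically improved efficiency" and "charging users more" are the only exits short of shutdown.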
The infrastructure overbuilding scenario is no longer purely a scenario. In early 2026, Azure growth hit what analysts described as an infrastructure wall, causing Microsoft’s shares to fall as the scale of AI data centre investment began visibly exceeding what demand could absorb. Excess capacity means lower utilisation, which means lower returns on capital, which means pressure to reduce future investments and potentially write down existing ones. This is financially painful but not catastrophic, except for the executives who approved the investments. They approved them with considerable confidence, which is worth remembering when the same executives explain that the situation was unforeseeable.
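The chain from utilisation to returns can be illustrated with invented figures: a data centre carries the same fixed costs whether or not the demand arrives, so the return on the capital that built it falls directly with utilisation.

```python
def return_on_capital(capital, annual_revenue_at_full, utilisation, fixed_costs):
    """Annual return on invested capital for a capacity-constrained asset."""
    revenue = annual_revenue_at_full * utilisation
    return (revenue - fixed_costs) / capital

# Hypothetical EUR figures for one data centre: 1bn invested,
# 300m annual revenue at full utilisation, 150m fixed costs.
print(return_on_capital(1_000_000_000, 300_000_000, 0.90, 150_000_000))  # demand arrived
print(return_on_capital(1_000_000_000, 300_000_000, 0.50, 150_000_000))  # overbuilt
```

Under these assumed numbers, dropping from 90% to 50% utilisation takes the return from comfortable to zero, which is the arithmetic behind the pressure to cut future investment and write down existing capacity.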
AI startup valuations will return to levels justified by business fundamentals rather than enthusiasm. Companies that raised at billion-euro valuations with minimal revenue face difficult choices. Many will choose acquihires or quiet shutdowns rather than dramatic collapses. The failures will be quiet. The obituaries will be brief. The lessons drawn will be insufficient.
The regulatory complications
Regulatory overreach occurs when poorly designed regulation constrains beneficial technology while failing to address actual harms. This is a policy failure rather than a technology failure, but it affects technology development substantially. Definitional overreach, where regulations define AI so broadly that conventional software gets caught in requirements designed for frontier models, creates compliance burdens that advantage large companies while strangling startups. The EU AI Act is currently creating substantial uncertainty about what counts as AI requiring compliance, which is an early indicator of this pattern.
Incompatible requirements from different jurisdictions create situations where companies must either comply with all requirements everywhere at considerable expense, or exit markets where compliance is uneconomical. This fragmentation is accelerating. It will continue. Meaningful international coordination would require different political systems with different values to agree on shared standards, which is possible in principle and rare in practice.
The Patrician notes that regulatory failures are practically guaranteed because writing good regulation for rapidly evolving technology is extraordinarily difficult, and because the people writing regulations rarely understand the technology as well as they believe they do. The question is whether regulatory failures are tolerably bad or catastrophically bad.
The security scenarios that actually happen
Supply chain compromises, in which adversaries introduce vulnerabilities into widely used software or hardware components, affect systems globally because those components are used throughout the industry. This has happened repeatedly at small scales. The SolarWinds breach demonstrated the pattern at substantial scale. Similar incidents are plausible for more foundational components.
AI model poisoning, in which attackers corrupt training data or models to produce subtly wrong outputs in specific circumstances, is difficult to detect and potentially serious if the models are used for important decisions. The attacks may remain undiscovered for extended periods because outputs appear normal in most cases. This is particularly concerning for open-source models whose training data provenance is unclear.
Ransomware attacks targeting critical infrastructure rather than individual companies create systemic disruptions rather than isolated incidents. Attacking cloud providers, utilities, or communication infrastructure simultaneously affects all downstream users. The attacks are economically motivated. The collateral damage may substantially exceed what the attackers intended or can reverse.
Trust erosion
AI hallucinations, where models confidently produce plausible-sounding nonsense, undermine trust in AI outputs generally. Users who experience AI providing incorrect information presented with confidence become cautious about relying on AI for anything important. This is rational behaviour. It is also difficult to reverse once established.
Deepfakes have advanced to the point where synthetic media is increasingly indistinguishable from authentic recordings. When any video or audio could plausibly be fabricated, the default assumption shifts from trusting media to questioning it. The epistemological consequences extend well beyond technology.
The Patrician notes that trust erosion happens gradually, then suddenly, when accumulated grievances reach thresholds that change behaviour systematically. The technology industry is currently in the gradual phase. Historical patterns suggest the sudden phase follows eventually.
The Patrician’s assessment
The most likely failures are economic disappointments, infrastructure disruptions, regulatory complications, persistent security breaches, and trust erosion. These are individually survivable and collectively manageable for organisations that prepare adequately. Most organisations will not prepare adequately because preparation competes poorly against current business pressures.
The systemic risk is not any individual failure but multiple failures coinciding or cascading. An infrastructure failure during an economic downturn during a security crisis during regulatory uncertainty creates compounding effects worse than the sum of the parts. The probability is difficult to estimate. It is nonzero and arguably increasing as systems become more complex and interdependent.
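The gap between independent and coinciding failures can be illustrated with a toy probability calculation, using invented annual probabilities and an assumed correlation multiplier: interdependence between failure modes pushes the chance of a compound crisis well above what an independent-events estimate suggests.

```python
# Hypothetical annual probabilities of each failure mode occurring.
p_infra, p_economic, p_security = 0.10, 0.15, 0.20

# If the failures were independent, all three coinciding in one year
# would be the product of the individual probabilities.
p_independent = p_infra * p_economic * p_security  # 0.10 * 0.15 * 0.20

# A crude way to model interdependence: scale the joint probability upward,
# since one failure (an outage, a breach) tends to trigger or expose others.
correlation_factor = 5  # assumed, purely for illustration
p_correlated = min(1.0, p_independent * correlation_factor)

print(p_independent, p_correlated)
```

The absolute numbers are invented; the point is structural. Treating cascading failure modes as independent understates the joint risk, and the understatement grows with exactly the complexity and interdependence the paragraph describes.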
The Patrician has observed that most catastrophes are predictable in retrospect and that warnings were available but ignored because taking them seriously would have been inconvenient, expensive, or politically difficult. The failures discussed here are similarly being ignored in favour of assuming current trajectories continue smoothly or in favour of worrying about dramatic scenarios that are less probable but more interesting.
When the failures occur, and they will, the response will include surprise despite the predictability, emergency measures that should have been routine preparation, and eventual return to optimistic assumptions once the immediate crisis passes. This is how complex systems managed by humans evolve. The Patrician will be unsurprised. He is rarely surprised. He considers this, on balance, an advantage.