What could actually go wrong

The Patrician has learned through decades of governance that the most dangerous risks are not the dramatic catastrophes that everyone discusses at length but the mundane failures that nobody considered worth preventing until they occurred. The city’s sewers didn’t collapse through dragon attack or magical mishap but through decades of deferred maintenance and optimistic assumptions about load-bearing capacity. The banking crisis of ‘87 wasn’t caused by goblins or foreign invasion but by several people discovering that a system everyone assumed was robust had been held together by convention and collective willful ignorance.

When contemplating what could go wrong with current technology trajectories, the temptation is to imagine science fiction scenarios involving rogue AI, technological singularities, or other dramatic events that make compelling film plots. These scenarios are possible in the abstract sense that many things are possible, but they’re not particularly probable compared to the boring ways that complex systems typically fail. The realistic failure modes are considerably less cinematic but substantially more likely.

The Patrician’s approach to risk assessment involves asking not “what’s the worst thing that could happen” but rather “what are the boring ways this could fail that everyone is currently ignoring because they’re busy worrying about exciting problems or busy being optimistic about success?” This produces a rather different list of concerns than the usual technology risk discussions, but probably a more useful one for anyone responsible for actual decisions rather than thought experiments.

The boring infrastructure failures

Technology infrastructure has been optimised for efficiency at the expense of resilience, which means that mundane failures can cascade in impressive and expensive ways. The single points of failure that everyone knows about but hasn’t eliminated because redundancy is expensive will eventually fail, as single points of failure traditionally do, and the resulting disruptions will surprise everyone who knew about the risk but assumed it wouldn’t actually materialise during their tenure.

Cloud provider outages occur regularly and will continue occurring because perfect reliability is impossible at scale. Most outages are brief and affect limited regions, which is manageable. The concerning scenario is a major outage affecting multiple regions simultaneously through cascading failures, which hasn’t happened yet but becomes more plausible as systems grow more complex and more interdependent. When such an outage occurs, the enormous number of services depending on that cloud provider will fail simultaneously, creating disruptions far beyond the immediate infrastructure problem.

The semiconductor supply chain concentrated in Taiwan represents a geopolitical risk that everyone acknowledges and nobody can quickly fix. If Taiwan’s situation deteriorates through conflict, natural disaster, or other disruption, the global technology supply chain experiences a crisis of unprecedented severity. This isn’t wild speculation but a straightforward consequence of geographic concentration. The disruption wouldn’t be permanent because alternative manufacturing would eventually emerge, but “eventually” is measured in years while the shortages would arrive within weeks, which is an uncomfortable temporal mismatch.

Power grid capacity in regions with data centre concentrations could become constrained as AI infrastructure growth exceeds grid expansion. Electrical utilities plan capacity years in advance based on historical growth trends that didn’t anticipate AI power consumption. The gap between data centre power demand and available supply creates situations where utilities must choose between limiting new data centre construction and imposing brownouts on existing customers. Either choice is problematic, and the problem is arriving faster than utilities can build additional capacity.
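A back-of-envelope sketch makes the mismatch concrete. Every figure below is an invented assumption for illustration rather than a measurement of any actual grid or region; the point is simply that compounding demand overtakes linear capacity additions sooner than planning cycles expect.

```python
# Illustrative sketch of compounding data centre demand against linear grid
# additions. All numbers are hypothetical assumptions, not real measurements.

datacentre_load_mw = 1_000        # assumed current data centre demand in a region
demand_growth_rate = 0.25         # assumed 25% annual growth in that demand
grid_headroom_mw = 1_500          # assumed spare generation and transmission capacity
grid_additions_mw_per_year = 150  # assumed new capacity the utility can add each year

for year in range(1, 11):
    datacentre_load_mw *= 1 + demand_growth_rate
    grid_headroom_mw += grid_additions_mw_per_year
    if datacentre_load_mw > grid_headroom_mw:
        print(f"Year {year}: demand {datacentre_load_mw:,.0f} MW exceeds "
              f"headroom {grid_headroom_mw:,.0f} MW")
        break
else:
    print("Headroom survives the decade under these assumptions")
```

Under these invented figures the crossover arrives in year three, comfortably inside the lead time for building new generation or transmission.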

Failures of the subsea cables that handle most intercontinental internet traffic are individually manageable but concerning in aggregate. The cables have redundancy and occasional breaks are repaired routinely. The scenario where multiple cables fail simultaneously through accident, sabotage, or natural disaster creates substantial internet fragmentation that would take weeks to repair. Global commerce increasingly depends on international connectivity that we’ve become accustomed to assuming is permanent.

The Patrician notes that infrastructure failures are inevitable and the question is not whether they’ll occur but whether we’ve prepared adequately for their occurrence. His assessment is that we generally haven’t because preparation for unlikely events competes poorly for resources against immediate business pressures.

The economic disappointments

The more probable near-term failures are economic rather than technical: AI companies discovering that impressive technology doesn’t translate to sustainable business models, cloud providers discovering that massive infrastructure investments don’t generate projected returns, and investors discovering that technology enthusiasm doesn’t reliably produce financial returns. These failures are individually survivable but collectively create recalibrations that are expensive and disruptive.

The AI business model problem is that many applications are popular but unprofitable because inference costs exceed revenue per user. This is sustainable temporarily through venture funding but eventually requires either dramatically improved efficiency, successfully charging users more, or accepting that the applications aren’t economically viable at current technology costs. Many current AI applications will likely end up with the last of these, which means they disappear despite being technically impressive and widely used.
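The arithmetic behind this is not complicated. The sketch below uses figures invented for illustration rather than drawn from any particular company’s accounts; the shape of the problem is that usage scales costs while revenue per user stays flat.

```python
# Unit economics sketch for a hypothetical AI application. Every number is an
# assumption made up for illustration, not data about any real product.

monthly_revenue_per_user = 8.00   # assumed subscription or advertising revenue (EUR)
queries_per_user_per_month = 600  # assumed usage of a popular assistant
cost_per_query = 0.02             # assumed inference cost per query (EUR)

inference_cost_per_user = queries_per_user_per_month * cost_per_query
contribution_per_user = monthly_revenue_per_user - inference_cost_per_user

print(f"Inference cost per user: {inference_cost_per_user:.2f} EUR/month")
print(f"Contribution per user:   {contribution_per_user:.2f} EUR/month")
```

Under these assumptions each active user loses four euros a month before salaries, training runs, and overheads are counted, which is the gap that venture funding is currently covering.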

The infrastructure overbuilding scenario involves cloud providers discovering that their massive investments in AI data centres exceed actual demand. The providers are building capacity based on optimistic projections about AI adoption that might not materialise at predicted rates. Excess capacity means lower utilisation, which means lower returns on capital, which means pressure to reduce future investments and potentially write down existing investments. This is financially painful but not catastrophic except for the executives who approved the investments.
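The same arithmetic works on the supply side. Assuming invented figures for capital cost, revenue at full utilisation, and operating margin, the sketch below shows how quietly returns decay as utilisation slips:

```python
# Sketch of utilisation driving return on capital for an AI data centre
# build-out. The capex, revenue, and margin figures are invented assumptions.

capex = 10_000_000_000                        # assumed build-out cost (EUR)
revenue_at_full_utilisation = 2_500_000_000   # assumed annual revenue at 100% utilisation
operating_margin = 0.40                       # assumed margin on that revenue

for utilisation in (0.9, 0.7, 0.5):
    annual_return = revenue_at_full_utilisation * utilisation * operating_margin / capex
    print(f"Utilisation {utilisation:.0%}: roughly {annual_return:.1%} return on capital")
```

The gap between the first and last line of output is the gap between a defensible investment and a write-down.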

The valuation correction, in which AI startup valuations return to levels justified by business fundamentals rather than enthusiasm, will disappoint investors and employees holding equity. Companies raising at billion-euro valuations with minimal revenue will face difficult choices between down rounds, sales at disappointing prices, and attempts at continued independence in a challenging funding environment. Many will end in low-visibility failures through acquihires or quiet shutdowns rather than dramatic collapses.

The talent market correction arrives when AI enthusiasm moderates and companies discover they overpaid for mediocre talent disguised by credential inflation. The correction happens through layoffs, compensation stagnation, and reduced hiring rather than dramatic events, but it’s unpleasant for the people experiencing it and creates spillover effects throughout the technology industry.

The capital efficiency realisation is that throwing money at AI problems doesn’t reliably produce returns and that better due diligence about business models would have prevented expensive mistakes. It arrives gradually through disappointing exits and fund returns rather than through sudden revelation, but the cumulative effect is reduced funding availability for subsequent companies.

The Patrician observes that economic failures are more probable than technical catastrophes and that preparing for them involves conservative financial management, skepticism about projections, and maintaining alternatives rather than assuming current trends continue indefinitely.

The regulatory overreach scenarios

Governments, discovering that technology companies have accumulated power and deployed capabilities without adequate oversight, are responding with regulation. The risk is not regulation itself, which is probably necessary, but poorly designed regulation that constrains beneficial technology while failing to address actual harms. This is a policy failure rather than a technology failure, but it affects technology development substantially.

The definitional overreach where regulations define AI so broadly that conventional software gets caught in requirements designed for frontier models creates compliance burdens that advantage large companies while strangling startups. The regulations might be well-intentioned but produce unintended consequences through poor definitions and excessive scope. This is currently happening with the EU AI Act, which attempts comprehensive regulation but creates substantial uncertainty about what actually counts as “AI” requiring compliance.

The incompatible requirements scenario where different jurisdictions impose mutually contradictory obligations forces companies to either segregate operations expensively by geography, comply with the strictest requirements globally at competitive disadvantage, or exit certain markets entirely. This fragmentation is already occurring through privacy regulation and will likely intensify with AI regulation as different countries pursue different approaches reflecting different values.

The innovation throttling, where compliance costs and regulatory uncertainty reduce experimentation and deployment of new capabilities, particularly affects smaller organisations lacking dedicated regulatory affairs departments. Large established companies can absorb regulatory compliance as overhead while startups find it consuming resources needed for product development. The regulation advantages incumbents over new entrants, which is the opposite of the stated policy objectives but consistent with how regulatory compliance costs typically operate.

The premature standardisation where regulations lock in current approaches before technology matures prevents development of potentially superior alternatives. Early AI regulation might inadvertently favour current architectures and approaches, making it difficult to pursue fundamentally different methods even if they’d be more capable or safer. This is challenging because waiting for technology maturity before regulating allows harms to accumulate, but regulating prematurely risks freezing suboptimal approaches.

The enforcement overreach where regulators pursue aggressive enforcement to establish credibility creates chilling effects where companies avoid entire application areas rather than risking regulatory action. This happened with GDPR where uncertainty about enforcement led many companies to be excessively conservative about data use even when applications would have been legally permissible. The same pattern could emerge with AI regulation.

The Patrician notes that regulatory failures are practically guaranteed because writing good regulation for rapidly evolving technology is extraordinarily difficult and because the people writing regulations rarely understand the technology as well as they believe they do. The question is whether the regulatory failures are tolerably bad or catastrophically bad.

The security scenarios that actually happen

Security failures in technology are inevitable and the question is whether we experience many small failures that are individually manageable or few large failures that are catastrophically disruptive. The trajectory is concerning because systems are becoming more complex and more interdependent while security practices are not improving proportionally.

The supply chain compromise where adversaries introduce vulnerabilities or backdoors into hardware or software components affects systems globally because components are used throughout the industry. This has happened repeatedly at small scales through compromised libraries or malicious packages. The concerning scenario is compromise of widely used components from major suppliers affecting enormous numbers of systems simultaneously. The SolarWinds breach demonstrated this pattern at substantial scale, and similar incidents are plausible for hardware components or critical software infrastructure.

The AI model poisoning where attackers corrupt training data or models to produce subtly wrong outputs in specific contexts is difficult to detect and potentially serious if the models are used for important decisions. The attacks might remain undiscovered for extended periods because the outputs appear normal in most cases and only fail in specific circumstances the attacker chooses. This is particularly concerning for open-source models where training data provenance is unclear.

The credential compromise at scale where adversaries gain access to authentication systems or databases containing credentials affects services throughout the internet. This happens regularly at company-specific scales through database breaches. The scenario where major identity providers or password managers are comprehensively compromised affects enormous numbers of services simultaneously because everyone reuses the same authentication mechanisms. The defences against this are improving, but not as fast as attack sophistication.

The ransomware escalation where attacks target critical infrastructure rather than individual companies creates systemic disruptions rather than isolated incidents. Attacking cloud providers, utilities, or communication infrastructure affects all downstream users simultaneously. The attacks are economically motivated but the collateral damage could be substantial if attackers misjudge and cause disruptions beyond their ability to reverse.

A zero-day vulnerability in widely deployed systems provides attack capability until patches are developed and deployed. The window between vulnerability discovery and comprehensive patching can be months, during which systems are exploitable. The scenario where multiple zero-days are used simultaneously, or where zero-days in critical infrastructure components are exploited before patches exist, creates security crises that are expensive and disruptive.

The Patrician observes that security failures are inevitable because attackers have advantages of choosing targets and timing while defenders must protect everything everywhere constantly. The sensible approach is assuming breaches will occur and preparing response capabilities rather than assuming prevention is perfect.

The trust and legitimacy erosion

Technology systems depend on users trusting them enough to adopt and use them. This trust is being eroded through multiple mechanisms that individually seem manageable but cumulatively create situations where users become skeptical of technology generally rather than just specific problematic applications.

The AI hallucination problem where models confidently generate plausible-sounding nonsense undermines trust in AI outputs generally. Users who experience AI providing incorrect information presented confidently become cautious about relying on AI for anything important. This is particularly problematic because the hallucinations are unpredictable and difficult to distinguish from correct outputs without domain expertise. As more people experience AI failures, trust in AI capabilities declines regardless of how often the systems work correctly.

The deepfake proliferation, where synthetic media becomes indistinguishable from the authentic, creates an epistemological crisis about what to believe. When any video or audio could plausibly be fake, the default assumption shifts from trusting media to skepticism about everything. This erosion of media trust has political and social consequences beyond technology itself, but technology enabled the problem.

The privacy breach fatigue where constant revelations about data collection and misuse make users either resigned to privacy loss or increasingly hostile to technology companies. The resignation means users continue using services while resenting the companies providing them. The hostility means users adopt adversarial relationships with technology through blocking, lying, or minimising engagement. Neither response is healthy for the technology ecosystem.

The algorithm opacity where important decisions are made by systems nobody can adequately explain creates resentment even when decisions are correct. People want to understand why they were denied loans, rejected from jobs, or served particular content. When the answer is “the algorithm decided” without meaningful explanation, users feel powerless and become hostile to the systems making decisions about their lives.

The platform manipulation where users discover they’re being systematically manipulated through engagement optimisation, algorithmic curation, or A/B testing creates cynicism about technology companies’ motives. The manipulation might be economically rational for platforms, but it creates user bases that are engaged but suspicious, which is unstable long-term.

The Patrician notes that trust erosion happens gradually, then suddenly, when accumulated grievances reach thresholds that change behaviour systematically. The technology industry is currently in the “gradually” phase but should be preparing for the “suddenly” phase that historical patterns suggest is inevitable.

The Patrician’s assessment

Looking at plausible failure modes with appropriate attention to boring risks rather than dramatic scenarios, The Patrician concludes that the most likely failures are economic disappointments, infrastructure disruptions, regulatory complications, persistent security breaches, and trust erosion rather than catastrophic technology disasters.

These failures are individually survivable and collectively manageable if organisations prepare adequately through financial conservatism, infrastructure redundancy, regulatory engagement, security investment, and attention to user trust. Most organisations will not prepare adequately because preparation competes poorly for resources against current business pressures and because humans are reliably bad at preparing for probable but uncertain future events.

The technology industry will therefore experience periodic crises as the boring failure modes materialise. AI companies will fail when business models don’t work. Infrastructure will have outages. Regulations will be poorly designed. Security breaches will occur. Trust will erode. These events will be presented as surprising despite being entirely predictable, and lessons will be learned temporarily before being forgotten when current crises pass and attention returns to growth.

The systemic risk is not any individual failure but the possibility of multiple failures coinciding or cascading. An infrastructure failure during an economic downturn during a security crisis during regulatory uncertainty creates compounding effects worse than the sum of individual problems. The probability of such coincidence is difficult to estimate but nonzero and arguably increasing as systems become more complex and more interdependent.
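A toy calculation illustrates why the coincidence matters more than any single failure. The probabilities below are invented for illustration, and the independence assumption is generous, since real failures tend to be correlated in ways that make the numbers worse:

```python
# Toy illustration of coinciding failures. The annual probabilities are
# invented assumptions, and independence is assumed purely for simplicity.

p_infrastructure_outage = 0.10  # assumed chance of a serious outage in a given year
p_funding_downturn = 0.15       # assumed chance of a sharp economic downturn
p_major_breach = 0.20           # assumed chance of a major security incident

p_at_least_one = 1 - (1 - p_infrastructure_outage) * (1 - p_funding_downturn) * (1 - p_major_breach)
p_all_three = p_infrastructure_outage * p_funding_downturn * p_major_breach

print(f"Chance of at least one in a given year: {p_at_least_one:.1%}")
print(f"Chance of all three coinciding:         {p_all_three:.2%}")
```

Even with these generous assumptions, the chance of some bad year is roughly four in ten, and the coinciding case, small in any single year, accumulates steadily over a long enough tenure.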

The sensible response is not abandoning technology, which is impractical, but managing risks through diversification, redundancy, financial conservatism, and skepticism about claims that current arrangements are more robust than historical patterns suggest. This approach lacks the optimism that makes for compelling business plans but has the considerable advantage of occasionally being correct about the messiness of reality compared to the tidiness of projections.

The Patrician has observed that most catastrophes are predictable in retrospect and that the consistent pattern is that the warnings were available but ignored because taking them seriously would have been inconvenient, expensive, or politically difficult. The boring failure modes discussed here are similarly being ignored in favour of assuming current trajectories continue smoothly or in favour of worrying about dramatic scenarios that are less probable but more interesting.

When the failures occur, and they will because they always do, the response will be surprise despite the predictability, emergency measures that should have been routine preparation, and eventual return to optimistic assumptions once the immediate crisis passes. This is how complex systems managed by humans typically evolve, through cycles of enthusiasm, crisis, response, and renewed enthusiasm rather than through steady prudent management that prevents crises before they occur.

The technology industry is currently in an enthusiastic phase with optimistic projections and minimal attention to failure modes. The boring failures will arrive in their own time, probably sooner than the dramatic failures everyone discusses but later than skeptics predict. The industry will survive these failures because industries typically do, but individual companies, investors, and users will experience disruptions that could have been mitigated through preparation that seemed unnecessary until it became urgently needed. This is the normal course of events and should surprise nobody who has paid attention to how technology cycles typically proceed, though it will nonetheless surprise nearly everyone when it arrives.