Power and dependencies

The technology industry has constructed a dependency structure of remarkable complexity and fragility, where enormous numbers of companies, applications, and services depend on a surprisingly small number of providers for critical infrastructure. This arrangement is either an impressive achievement of efficient specialisation or a catastrophic concentration of strategic vulnerability, depending on whether you’re optimising for cost or resilience. The industry has collectively chosen the former while hoping the latter doesn’t become relevant, which works adequately until it doesn’t.

Understanding power dynamics in technology requires mapping who controls the bottlenecks. TSMC manufactures most of the world’s most advanced semiconductors, making them indispensable to everyone building cutting-edge systems. AWS, Azure, and Google Cloud host enormous portions of the internet’s infrastructure, making them indispensable to companies that exist primarily as software. Nvidia supplies the majority of AI accelerators, making them indispensable to anyone training large models. These companies are not monopolies in the legal sense because alternatives exist, but they’re dependencies in the practical sense because the alternatives are inferior, more expensive, or available only at scales that don’t help most customers.

The dependency chains are layered and interconnected. Cloud providers depend on chip manufacturers who depend on equipment suppliers who depend on specialised materials companies. Software companies depend on cloud providers who depend on data centre operators who depend on power utilities. Any disruption propagates through these chains in ways that are invisible until they become catastrophic, at which point everyone simultaneously discovers that the industry’s efficiency optimisation created single points of failure that nobody thought to eliminate because they worked fine until they didn’t.

The Patrician would observe that the current arrangement concentrates remarkable power in a few hands, that this power is generally exercised with restraint because alternatives would damage everyone, and that the restraint will continue until incentives change or until someone decides that their short-term advantage from exploiting their position exceeds the long-term costs of destabilising the ecosystem. This is not reassuring but it’s honest about the nature of concentrated power in supposedly distributed systems.

The cloud oligopoly

Three companies control the majority of global cloud infrastructure, which is fewer than the number of fingers on one hand and considerably fewer than the number of companies depending on that infrastructure. Amazon Web Services remains the largest, Microsoft Azure is growing fastest, and Google Cloud is third but substantial. Together they host an enormous fraction of the internet’s services, making them infrastructure that others build upon rather than merely competitors in a market.

The barriers to competing with these providers are formidable. Building global data centre infrastructure requires tens of billions of euros in capital investment. Operating that infrastructure efficiently requires expertise accumulated over years of experience. Achieving economies of scale requires customer bases large enough to amortise fixed costs. Maintaining security and reliability at scale requires engineering capabilities that most organisations cannot replicate. These barriers mean that new cloud providers emerge rarely and succeed even more rarely.

The market structure is stable but potentially fragile. AWS, Azure, and Google Cloud are large enough to be viable independently, invest continually in capacity and capabilities, and benefit from network effects where more customers enable better services which attract more customers. Smaller cloud providers either serve specialised niches or exist in precarious positions where they lack economies of scale that major providers enjoy. The result is an oligopoly that’s stable absent external disruption but not obviously contestable through normal competitive mechanisms.

Customer dependencies on cloud providers are substantial and increasing. Companies that began using cloud services for peripheral workloads have progressively migrated core systems, accumulated technical debt in cloud-specific services, and developed operational expertise around particular providers’ tools. Switching providers is theoretically possible but practically expensive enough that most customers prefer negotiating better terms with existing providers over undertaking multi-year migration projects.

The power this provides cloud providers is tempered by reputational concerns, competitive dynamics, and customer sophistication. Cloud providers generally avoid exploiting their positions egregiously because damaging customer relationships would encourage multi-cloud strategies, regulatory attention, or development of alternatives. The restraint is self-interested rather than altruistic but is nonetheless real and meaningful for customers depending on their infrastructure.

Multi-cloud strategies provide some risk mitigation but are expensive and complicated. Maintaining competency across multiple providers, managing inconsistent interfaces, and replicating infrastructure redundantly costs more than single-provider strategies. Most organisations adopt multi-cloud reactively, after experiencing a provider outage, rather than proactively, and even then the implementation often means using multiple providers for different purposes rather than true redundancy that would allow seamless failover.
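To make concrete why true redundancy costs more than merely signing contracts with two providers, here is a minimal sketch of an application-level failover wrapper. The ObjectStore interface and provider clients are hypothetical stand-ins rather than any real vendor’s API; in practice each provider hides different data models, consistency guarantees, and pricing behind whatever common interface you define.

```python
from typing import Protocol


class ObjectStore(Protocol):
    """Hypothetical common interface that each provider's client is wrapped to satisfy."""

    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class FailoverStore:
    """Write to the primary provider and fall back to the secondary when it fails.

    A sketch only: real failover also needs replication, health checks, and a
    plan for data written to one side during an outage, none of which is free.
    """

    def __init__(self, primary: ObjectStore, secondary: ObjectStore) -> None:
        self.primary = primary
        self.secondary = secondary

    def put(self, key: str, data: bytes) -> None:
        try:
            self.primary.put(key, data)
        except Exception:
            # Degraded mode: the object now exists only at the secondary provider.
            self.secondary.put(key, data)

    def get(self, key: str) -> bytes:
        try:
            return self.primary.get(key)
        except Exception:
            return self.secondary.get(key)
```

Even this toy wrapper glosses over replication lag, divergent failure modes, and the cost of paying two providers to hold the same data, which is roughly why most organisations stop at using different providers for different workloads.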

The semiconductor chokepoint

TSMC’s dominance in advanced semiconductor manufacturing creates a chokepoint where most cutting-edge chips must pass through a single manufacturer in a geopolitically sensitive location. This concentration is the result of decades of technical excellence, enormous capital investment, and competitors falling behind or withdrawing from the leading edge. It’s also a strategic vulnerability that everyone acknowledges and that nobody can solve quickly.

The dependency extends beyond the chips themselves to the entire ecosystem that enables TSMC’s capabilities. ASML provides the extreme ultraviolet lithography machines that are essential for advanced manufacturing and that only ASML makes. Numerous chemicals, materials, and equipment suppliers provide inputs that have few alternatives. The semiconductor supply chain is deeply interconnected with specialised providers who themselves are potential bottlenecks if their supplies are disrupted.

Chip designers like Nvidia, AMD, Apple, and Qualcomm depend on TSMC for manufacturing their most advanced products. These companies design world-class chips but cannot manufacture them because building competitive fabrication capacity would require tens of billions of euros and years of development. They’re dependent on TSMC not just as a supplier but as the only supplier capable of manufacturing their designs at required quality, volume, and cost.

The alternatives to TSMC are limited and mostly inferior. Samsung manufactures advanced chips but with lower yields and less consistent quality. Intel is attempting to become a competitive foundry but currently lags TSMC in process technology. Older process nodes are available from multiple manufacturers but don’t provide the performance and efficiency that cutting-edge applications require. The alternatives are sufficient for some purposes but inadequate for others, making TSMC’s position secure for advanced manufacturing.

Geographic concentration in Taiwan compounds the dependency because geopolitical tensions create risk that purely commercial dependencies don’t. If Taiwan’s situation deteriorates through conflict, natural disaster, or other disruption, the global technology supply chain would face a crisis of unprecedented severity. Everyone understands this risk, and governments are subsidising domestic semiconductor capacity to mitigate it, but the mitigation will take years to decades and won’t eliminate TSMC’s dominance in advanced manufacturing.

The power TSMC wields is constrained by their need for continued customer relationships, their position in a geopolitically contested region that limits how aggressively they can exploit market power, and their own dependencies on equipment and materials suppliers. They’re powerful but not unconstrained, which provides some comfort to their customers while not eliminating the fundamental dependency.

Nvidia’s AI infrastructure dominance

Nvidia has achieved a remarkable position where they supply the majority of AI accelerators, the software stack that runs on them, and increasingly the networking that connects them. This vertical integration means that building large-scale AI infrastructure increasingly means buying Nvidia products throughout the stack, which gives them extraordinary influence over AI development trajectories.

The technical advantages are real and substantial. Nvidia’s GPUs perform AI workloads faster and more efficiently than alternatives. CUDA, their programming framework, has accumulated years of development and optimisation. Their networking products are designed specifically for AI cluster requirements. The products work well together because they’re designed as a system rather than independent components, which provides genuine value to customers beyond just vendor lock-in.

The dependency this creates is problematic for customers because alternatives are limited and mostly inferior. AMD produces capable GPUs but with a less mature software ecosystem and a smaller installed base, which means less community support. Google’s TPUs are competitive for some workloads but only available through Google Cloud, which constrains customer flexibility. Intel is developing AI accelerators but currently trails market leaders. Custom silicon from companies like Amazon provides some independence but requires expertise most organisations lack.
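One modest hedge against this dependency is to keep application code device-agnostic and confine the accelerator choice to a single place, so that trying an alternative backend is a one-line change rather than a rewrite. A minimal sketch using PyTorch, assuming a build with the relevant backends available; the fallback order is purely illustrative.

```python
import torch


def pick_device() -> torch.device:
    """Prefer an Nvidia GPU but fall back to other backends when it is absent.

    Keeping this decision in one function means the rest of the codebase never
    hard-codes "cuda", which lowers the cost of evaluating alternatives later.
    """
    if torch.cuda.is_available():          # Nvidia GPUs (ROCm builds also expose this API)
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple silicon
        return torch.device("mps")
    return torch.device("cpu")             # always available, merely slower


device = pick_device()
model = torch.nn.Linear(128, 10).to(device)
batch = torch.randn(32, 128, device=device)
logits = model(batch)  # the same code path runs on whichever backend was found
```

None of this removes the performance gap described above; it only keeps the exit visible.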

The market structure means that Nvidia can charge substantial premiums, allocate limited supply according to their priorities, and influence industry directions through their technology choices. They’re not monopolists because alternatives exist, but they’re close enough to make customers uncomfortable, though not so uncomfortable that switching to inferior alternatives seems worth the costs and risks.

The H100 GPU supply situation illustrated Nvidia’s market power vividly. When demand exceeded supply, customers competed for allocation rather than Nvidia competing for customers. Lead times stretched to months, prices increased, and customers with allocation were in enviable positions. This seller’s market is temporary because competitors are developing alternatives and because supply will eventually meet demand, but it demonstrates the power that comes from controlling critical bottlenecks during periods of scarcity.

Nvidia’s position is defensible through continued technical innovation, ecosystem investment, and customer relationships. They’re not resting on current dominance but actively developing next-generation products, expanding software capabilities, and building deeper integrations with cloud providers and AI companies. Whether this maintains their dominant position or whether competitors eventually erode it depends on technical progress, market developments, and whether customers prioritise flexibility enough to accept some performance disadvantages for reduced dependency.

Open source as power distribution

Open source software provides a counterweight to commercial dependencies by creating shared infrastructure that no single company controls. Linux underpins cloud computing, Apache and Nginx power web infrastructure, and Kubernetes orchestrates container deployments across the industry. This shared infrastructure reduces dependency on any single vendor and provides alternatives to proprietary systems.

The paradox is that open source is increasingly controlled by the same large companies that it theoretically provides alternatives to. Google, Microsoft, Amazon, and Meta are among the largest contributors to major open source projects. They employ core maintainers, fund development, and influence project directions through their contributions and governance participation. This provides them influence over supposedly independent projects, though the influence is moderated by community governance and the ability of others to fork projects if they disagree with directions.

Open source AI models, such as Meta’s Llama family and Mistral’s releases, provide alternatives to proprietary foundation models from OpenAI, Google, and Anthropic. These open models enable applications without dependency on API providers, allow customisation that proprietary models don’t permit, and provide fallback options if proprietary models become unavailable or unaffordable. The open models are generally less capable than frontier proprietary models but adequate for many applications and improving rapidly.

The sustainability of open source development is questionable when projects require substantial ongoing investment. Volunteer contributions work for mature, stable projects but struggle to sustain the rapid development that commercial AI competition requires. Companies sponsoring open source projects do so because it serves their strategic interests, which is fine until interests change. The projects are open but their practical viability often depends on continued corporate sponsorship that might not be permanent.

Open source licensing debates create tensions between making software freely available and preventing cloud providers from building competing services using that software. Companies creating open source software increasingly use licences restricting commercial use or requiring source distribution for services built with the software. These restrictions preserve some value capture for creators while limiting the freedom that originally characterised open source. The debates reflect tensions between sustainability and openness that don’t have obvious resolutions.

Data as power and dependency

Data has become infrastructure that companies accumulate and control, creating dependencies for others who need access to that data for their applications. Google’s search index, Meta’s social graph, Amazon’s marketplace data, and similar datasets are valuable both for the companies’ own uses and as potential foundations for others’ services that the data owners can permit or deny.

The accumulation advantages are substantial. Companies with large user bases collect more data, which enables better services, which attract more users, who in turn generate more data, in a self-reinforcing cycle. This creates natural monopolies or oligopolies in areas where data effects are strong, because new entrants cannot compete without equivalent data that they cannot obtain without equivalent user bases that they cannot attract without competitive services that they cannot build without data.

Data portability requirements from GDPR and similar regulations provide some counterweight by allowing users to extract their data and move to competitors. The practical value is limited because data portability provides individual data but not the aggregate data needed for many services. A social network user can export their connections but cannot export the implicit data from everyone’s collective behaviour that makes recommendation algorithms work. The portability is real but insufficient for enabling meaningful competition in many cases.

Proprietary datasets like training data for AI models create dependencies where companies that compiled large clean datasets have advantages over competitors who must replicate that work. Web scraping, data licensing, and other acquisition methods are available but expensive and legally uncertain. Some companies have accumulated datasets that cannot be readily replicated because they came from access that no longer exists or relationships that cannot be reproduced.

Synthetic data and data augmentation provide partial alternatives to accumulating massive natural datasets. These techniques generate training data programmatically or enhance existing data to increase effective dataset sizes. They’re useful but not perfect substitutes for large diverse natural datasets, which means data accumulation advantages persist even as synthetic data reduces their magnitude.
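For a sense of what augmentation means in practice, here is a minimal sketch that expands a toy image dataset with label-preserving transforms. It is purely illustrative, uses only NumPy, and, as noted above, is no substitute for genuinely diverse natural data.

```python
import numpy as np

rng = np.random.default_rng(0)


def augment(image: np.ndarray) -> list[np.ndarray]:
    """Produce label-preserving variants of a single image (an H x W array in [0, 1])."""
    flipped = np.fliplr(image)                                             # horizontal mirror
    noisy = np.clip(image + rng.normal(0.0, 0.05, image.shape), 0.0, 1.0)  # mild pixel noise
    shifted = np.roll(image, shift=2, axis=1)                              # small translation
    return [flipped, noisy, shifted]


# A toy "dataset" of 100 images becomes 400 training examples.
dataset = [rng.random((32, 32)) for _ in range(100)]
augmented = [img for original in dataset for img in (original, *augment(original))]
print(len(augmented))  # 400
```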

The power of data control is constrained by privacy regulations, competitive concerns, and the decreasing marginal value of additional data. Regulations limit what can be collected and how it can be used. Competition pressure makes hoarding data sometimes less valuable than enabling ecosystem development through data sharing. And past certain scales, additional data provides diminishing improvements to models and services. These constraints are real but don’t eliminate the advantages that data accumulation provides to incumbents.

Infrastructure fragility and resilience

The interconnected dependencies create fragility where disruptions propagate through supply chains, technology stacks, and service dependencies in ways that are difficult to predict and expensive to mitigate. The efficiency optimisation that created these dependencies assumed stable conditions that may not persist.

Single points of failure exist throughout the technology stack despite efforts to build redundancy. A major cloud region outage disrupts numerous services simultaneously. Semiconductor manufacturing disruption affects everything downstream. A critical open source project with few maintainers becomes a bottleneck when maintenance lapses. These single points are known but often accepted because redundancy is expensive and failures are rare enough that the efficiency gains justify the risks until major incidents provide expensive reminders about fragility.

Cascading failures demonstrate how dependencies amplify disruptions. A DNS provider failure disrupts websites relying on their service, which disrupts payment processing, which disrupts e-commerce, which disrupts logistics. The original failure was modest but the cascade affected far more systems than the initial incident because of interconnected dependencies. These cascades are inherent to complex systems and cannot be eliminated entirely through better design or redundancy.
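To make the cascade concrete, here is a minimal sketch that traces the blast radius of a single failure through a toy dependency graph. The graph loosely follows the chain described above; the node names and the extra edge are illustrative, not a map of any real infrastructure.

```python
from collections import deque

# Toy dependency graph: each service maps to the services that depend on it,
# so edges point from a provider to its dependants.
DEPENDANTS = {
    "dns-provider": ["website", "status-page"],
    "website": ["payment-processing"],
    "payment-processing": ["e-commerce"],
    "e-commerce": ["logistics"],
    "status-page": [],
    "logistics": [],
}


def blast_radius(failed_service: str) -> set[str]:
    """Return every service transitively affected by one failure (breadth-first)."""
    affected: set[str] = set()
    queue = deque([failed_service])
    while queue:
        current = queue.popleft()
        for dependant in DEPENDANTS.get(current, []):
            if dependant not in affected:
                affected.add(dependant)
                queue.append(dependant)
    return affected


# A single DNS outage reaches five downstream systems in this toy graph.
print(sorted(blast_radius("dns-provider")))
```

The original failure is one node; the damage is the size of everything reachable from it, which is the property that makes such maps worth drawing before an incident rather than during one.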

Geographic concentration risks are substantial. Semiconductor manufacturing concentrated in Taiwan, cloud data centres concentrated in specific regions, and internet infrastructure concentrated in major peering locations create geographic dependencies that natural disasters, geopolitical conflicts, or infrastructure failures can disrupt simultaneously. Diversification is expensive and incomplete because some concentrations are driven by fundamental economic or technical constraints rather than mere oversight.

The build-versus-buy trade-offs affect dependency levels. Building internal capabilities provides independence but requires resources most organisations lack. Buying services provides efficiency but creates dependencies on providers. Most organisations buy rather than build because the economic advantages are overwhelming, which creates the current dependency structure. Changing this would require either accepting substantially higher costs or believing that independence is worth those costs, which most organisations conclude it isn’t, at least until they experience a dependency failure.

Resilience strategies include multi-provider approaches, extensive monitoring, incident planning, and maintaining relationships with alternative suppliers. These strategies help but cannot eliminate fundamental dependencies without sacrificing the efficiency gains that created them. The realistic approach is managing dependencies through awareness, contingency planning, and relationships that enable rapid switching when necessary while accepting that some dependencies are inevitable in complex technical systems.

The redistribution question

Whether current power concentrations persist or get redistributed depends on technological developments, competitive dynamics, regulatory interventions, and whether concentrated power is exercised in ways that motivate countermeasures. The current structure is stable but not immutable.

Technological disruption could redistribute power if new approaches enable alternatives to current dependencies. Quantum computing might eventually disrupt classical computing dominance if it achieves practical quantum advantage. Novel architectures might challenge current chip designs if they provide sufficient performance advantages. Decentralised technologies might reduce dependence on centralised cloud providers if they mature beyond current limitations. These disruptions are possible but not guaranteed and would take years to materialise even if technically successful.

Regulatory intervention could force power redistribution through antitrust action, interoperability requirements, or structural separation. Governments uncomfortable with current concentrations have tools to compel changes, though using those tools creates its own challenges and unintended consequences. The regulatory trajectory suggests increased intervention is likely, but whether that intervention successfully redistributes power versus merely constraining its exercise is uncertain.

Market dynamics might naturally redistribute power if competitors develop superior alternatives or if customers coordinate to demand better terms. The history of technology markets suggests that dominant positions are eventually challenged, though the timeframes vary enormously and current dominance often persists longer than observers expect. The market might eventually rebalance through competition, but it might not happen quickly enough to help current stakeholders.

Customer sophistication and coordination could modify power dynamics by negotiating collectively, developing multi-provider strategies, or investing in alternatives. Large customers have leverage individually, but most organisations are small relative to their providers and lack individual negotiating power. Collective action is difficult to organise and sustain but could potentially balance concentrated provider power if customers cooperated systematically.

The realistic outlook is that current power concentrations will persist for years with gradual evolution rather than rapid redistribution. The dependencies are economically rational, technically justified, and difficult to eliminate without sacrificing efficiency gains they enable. Change will come through accumulation of competitive pressure, regulatory intervention, and technological alternatives rather than through sudden disruption. The power will remain concentrated but possibly exercised with more restraint as providers balance exploiting their positions against creating pressures for regulatory intervention or customer defection.

Understanding power and dependencies in technology requires recognising that efficiency optimisation has created concentrated control over critical infrastructure, that these concentrations create both value and vulnerability, and that unwinding them would be expensive enough that most stakeholders prefer managing the concentrations over eliminating them. The current arrangements work adequately for most purposes most of the time, which ensures their persistence until major failures provide motivation for expensive restructuring that organisations would otherwise prefer to avoid. This is how infrastructure dependencies typically evolve: long periods of accumulated concentration punctuated by occasional crises that motivate temporary enthusiasm for resilience, before efficiency concerns reassert themselves and the concentration resumes.