Regulatory stirrings¶
Governments have discovered, with the characteristic promptness of large bureaucratic organisations, that the technology industry has been doing quite a lot of interesting things while nobody was paying particularly close attention. This realisation has prompted the regulatory equivalent of someone discovering that their teenager has been hosting increasingly ambitious parties and deciding that perhaps some house rules should be established before the furniture becomes unrecognisable.
The regulatory response is emerging across multiple jurisdictions with varying degrees of coherence, urgency, and understanding of what they’re actually regulating. Europe is writing comprehensive frameworks that attempt to address everything simultaneously. The United States is pursuing targeted enforcement through existing authorities while Congress discusses legislation that may or may not emerge before the technology landscape changes completely. China is implementing regulations that serve both governance and industrial policy objectives. Smaller jurisdictions are mostly copying larger ones while hoping the regulations won’t disadvantage their domestic industries too severely.
The technology industry’s response combines public cooperation, private objection, extensive lobbying, and careful analysis of which regulations can be complied with cheaply versus which threaten fundamental business models. Companies that built empires on regulatory ambiguity are discovering that ambiguity eventually resolves, though not always in their favour. The resulting negotiations between regulators who are playing catch-up and companies that prefer the previous arrangement are producing regulations that will either meaningfully constrain technology companies or provide symbolic victories for regulators while leaving fundamental practices largely intact.
Meanwhile, the technology keeps advancing faster than regulatory processes can address it, which means that by the time regulations arrive, they’re often addressing yesterday’s concerns while tomorrow’s concerns remain unregulated. This is frustrating for everyone except the companies that benefit from the regulatory lag and find it quite satisfactory.
The European approach to comprehensive regulation¶
Europe has adopted maximalist regulatory ambitions through frameworks like GDPR for privacy, the Digital Markets Act for competition, the Digital Services Act for content moderation, and the emerging AI Act for artificial intelligence. This represents the most comprehensive attempt to regulate technology through formal legislation rather than relying on existing laws or enforcement discretion.
The AI Act attempts to categorise AI systems by risk level and impose requirements proportional to risk. High-risk systems like those used in healthcare, law enforcement, or critical infrastructure face extensive requirements for transparency, testing, and documentation. Lower-risk systems face lighter requirements. Unacceptable-risk systems like social scoring are prohibited entirely. This risk-based approach is sensible in theory but requires determining which category each AI system falls into, which is considerably harder than the legislation acknowledges.
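The tiered structure described above can be sketched as a simple lookup. This is purely illustrative: the domain names and the mapping below are hypothetical examples drawn from the categories mentioned in the text, and real classification under the AI Act depends on a system’s intended purpose and legal analysis, not a dictionary lookup.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"       # e.g. social scoring
    HIGH = "extensive obligations"             # e.g. healthcare, law enforcement
    LIMITED = "transparency duties"
    MINIMAL = "no specific obligations"

# Hypothetical mapping for illustration only; actual risk classification
# requires assessing the system's intended purpose against the Act.
DOMAIN_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(domain: str) -> RiskTier:
    """Return the illustrative risk tier for an application domain."""
    return DOMAIN_TIERS.get(domain, RiskTier.MINIMAL)
```

The sketch also makes the definitional problem concrete: a general-purpose model has no single domain to look up, which is precisely the categorisation difficulty the legislation underestimates.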
The definitional challenges are substantial. What constitutes an AI system versus conventional software? Which applications are high-risk versus lower-risk? How do you assess risk for general-purpose AI models that can be applied to anything? The legislation provides frameworks but leaves enormous discretion to regulators and considerable uncertainty for companies attempting compliance. This uncertainty is partly unavoidable given the technology’s breadth but also reflects the difficulty of writing clear rules for capabilities that are still being understood.
The compliance burden for the AI Act will be substantial, particularly for smaller companies lacking dedicated regulatory affairs departments. The requirements for documentation, testing, transparency, and ongoing monitoring are extensive and expensive. Large companies can absorb these costs as overhead, but startups may find compliance consuming resources needed for product development. This potentially advantages established companies over new entrants, which is opposite to the Act’s competition objectives but consistent with how regulatory compliance costs typically affect market structure.
Enforcement remains uncertain because the AI Act is recent and enforcement infrastructure is still being established. The European Commission and national regulators must develop expertise in AI systems, investigation procedures, and enforcement priorities. Early enforcement will set precedents determining how seriously companies take compliance requirements. If enforcement is vigorous and penalties substantial, compliance will be rigorous. If enforcement is sporadic and penalties modest, compliance will be perfunctory. The technology industry is watching early enforcement carefully to calibrate its compliance investments accordingly.
The extraterritorial reach of European regulation means that companies operating globally must often comply with European requirements even for non-European operations because segregating operations by jurisdiction is expensive and complicated. This gives European regulation outsized influence on global technology practices, which is either laudable regulatory leadership or problematic overreach depending on your perspective and jurisdiction.
American enforcement through existing authorities¶
The United States has largely eschewed comprehensive technology legislation in favour of applying existing laws to technology companies and pursuing enforcement through agencies like the Federal Trade Commission and Department of Justice. This pragmatic approach avoids the difficulties of passing legislation through a divided Congress but creates uncertainty about which practices violate existing laws and how aggressively those laws will be enforced.
Antitrust enforcement has intensified with cases against Google, Meta, Amazon, and Apple challenging various practices as anticompetitive. The cases involve search defaults, app store policies, advertising practices, and platform self-preferencing. The enforcement represents a shift from previous decades where technology companies faced minimal antitrust scrutiny despite achieving dominant market positions. Whether the enforcement succeeds in constraining these companies depends on how courts interpret competition law in technology contexts, which remains genuinely uncertain.
The FTC has pursued privacy and data security enforcement through its authority over unfair and deceptive practices. This allows addressing privacy harms without comprehensive privacy legislation but provides less clarity than formal privacy laws about what’s permitted versus prohibited. Companies must infer requirements from enforcement actions and consent decrees rather than following clear statutory rules. This enforcement-first approach is flexible but unpredictable.
AI regulation at the federal level remains largely absent beyond sector-specific rules. The FDA regulates medical AI, financial regulators address algorithmic trading, and various agencies issue guidance about AI in their domains. This fragmented approach means AI regulation varies by application, which is sensible given AI’s breadth but creates complexity for companies deploying AI across multiple sectors. Comprehensive federal AI legislation has been proposed repeatedly but not enacted, which leaves companies navigating patchwork requirements.
State-level regulation is filling the federal gap with varying results. California’s privacy laws, sectoral AI regulations, and technology-focused legislation create de facto national standards because companies often comply nationally rather than maintaining California-specific practices. Other states are enacting their own requirements, which creates compliance complexity when states impose inconsistent obligations. The state-level activity may eventually pressure federal action or may simply create permanent fragmentation depending on whether Congress can agree on federal standards.
The executive branch has issued various AI-related executive orders, agency guidance, and policy frameworks that establish principles without creating binding obligations. These documents signal priorities and shape how existing authorities are exercised but don’t create new regulatory requirements. They’re useful for understanding government thinking but don’t substitute for legislation or regulation with force of law.
Competition concerns and platform power¶
Competition authorities globally are scrutinising technology platforms with renewed vigour after years of relatively permissive oversight. The concerns focus on market power, barriers to entry, self-preferencing, and whether dominant platforms are foreclosing competition in adjacent markets.
Search and advertising markets face scrutiny over whether Google’s dominance in search, achieved through superior product quality, is being maintained through anticompetitive practices like paying for default placement or self-preferencing Google services in search results. The cases turn on whether these practices harm competition or are legitimate business strategies. Courts are grappling with whether competition law should focus on consumer harm, which is difficult to demonstrate when services are free, or broader concerns about market structure and innovation.
App stores face challenges over their policies, commission rates, and restrictions on alternative payment methods. Apple and Google’s control over mobile app distribution and their 15-30 percent commissions on transactions are under regulatory pressure in multiple jurisdictions. The EU’s Digital Markets Act requires allowing alternative app stores and payment methods. Other jurisdictions are considering similar requirements. The platforms argue their policies ensure security and quality while critics argue they’re extracting monopoly rents and foreclosing competition.
Cloud computing markets are being examined for whether the major providers are leveraging their infrastructure dominance into adjacent markets through bundling, preferential pricing, or technical integration that disadvantages competitors. The concerns are that companies dependent on cloud infrastructure must compete with their infrastructure providers, who have structural advantages from controlling the platform. The investigations are early stage but could lead to requirements for neutral platform operation or structural separation.
AI foundation models create new competition concerns about whether control over advanced AI capabilities will be concentrated among a few companies with resources to train frontier models. The barriers to entry from computational requirements and data needs are substantial, which naturally concentrates capabilities. Whether this concentration is problematic depends on whether it forecloses competition in AI applications or whether sufficient competition exists at the foundation model layer. Regulators are still forming views on appropriate interventions if any.
Acquisition policies are tightening with increased scrutiny of technology companies acquiring potential competitors. Regulatory authorities are challenging acquisitions more frequently and imposing conditions more stringently. The shift reflects concern that previous permissive merger review allowed technology giants to eliminate competitive threats through acquisition. Companies now face more extensive review processes and higher likelihood of conditions or outright prohibition for deals involving potential competition.
Privacy and data protection evolution¶
Privacy regulation has evolved from sector-specific rules to comprehensive frameworks attempting to address data collection, use, and protection holistically. GDPR established the template that many jurisdictions have adapted to their contexts, creating global privacy requirements that are broadly similar in structure while varying in details.
GDPR’s requirements for consent, data minimisation, purpose limitation, and individual rights have become de facto global standards because companies operating internationally often comply globally rather than maintaining different practices by jurisdiction. The regulation has been enforced with varying vigour across European member states, with some countries pursuing active enforcement and substantial fines while others have been more restrained. The penalty structure of up to €20 million or 4 percent of global annual turnover, whichever is higher, creates potential for substantial fines that companies take seriously.
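The penalty ceiling is worth working through, because the “whichever is higher” structure is what makes it bite at every company size. A minimal sketch, using GDPR Article 83(5)’s upper-tier cap and hypothetical turnover figures:

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper-tier fine cap under GDPR Article 83(5): the greater of
    EUR 20 million or 4% of total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Hypothetical figures: a company with EUR 100 billion turnover faces
# a cap of EUR 4 billion, while a firm with EUR 10 million turnover
# still faces the EUR 20 million floor.
```

The floor means small processors cannot dismiss the regime as proportionate to their size, while the percentage means the largest companies cannot treat it as a fixed cost of doing business.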
The right to explanation for automated decisions, including AI systems, creates tensions between privacy rights and AI capabilities. Providing meaningful explanations for complex model decisions is technically challenging and potentially impossible for some models. Companies are developing explanation methods that satisfy regulators while being usable by individuals, which is difficult when explanations must be both technically accurate and comprehensible to non-experts.
Data localisation requirements are emerging in various jurisdictions requiring that data about residents be stored or processed domestically. These requirements serve privacy, sovereignty, and industrial policy objectives but complicate global operations and increase costs. Cloud providers must build regional data centres, companies must segregate data by jurisdiction, and the efficiencies of global data processing are sacrificed for localisation requirements that may or may not meaningfully improve privacy or security.
Biometric data and AI training data are receiving increased regulatory attention. Facial recognition, emotion detection, and other biometric applications face restrictions or outright prohibitions in various contexts. The use of personal data for AI training raises questions about whether existing consent covers training purposes and whether individuals have rights to prevent their data from being used in models. These issues are being litigated and regulated with outcomes that will significantly affect AI development practices.
The evolution continues with new privacy concerns emerging as technology capabilities advance. Location tracking, behavioural advertising, algorithmic personalisation, and data brokers all face increasing regulatory scrutiny. The trend is toward more comprehensive data protection requirements, stricter enforcement, and higher penalties for violations. Companies that built business models on permissive data practices are adapting to a regulatory environment that is substantially less accommodating.
Content moderation and platform responsibility¶
Platforms hosting user-generated content face increasing pressure to moderate harmful content while respecting free expression, which is a balance that nobody has solved satisfactorily and everyone has opinions about. The regulatory responses vary by jurisdiction based on different speech traditions, political contexts, and technological understanding.
The EU’s Digital Services Act imposes content moderation obligations scaled to platform size and risk. Very large platforms face the most extensive requirements including risk assessments, transparency reports, and systems for handling illegal content. The law attempts to balance removing harmful content with preserving legitimate expression, which is conceptually straightforward and practically difficult given the volume of content, ambiguity of many cases, and disagreement about where lines should be drawn.
Section 230 in the United States provides platforms immunity from liability for user content while allowing moderation at their discretion. This framework has enabled platform growth but faces criticism from both sides for either allowing too much harmful content or enabling too much censorship depending on the critic’s perspective. Reform proposals are numerous but consensus is absent, which means Section 230 persists despite widespread dissatisfaction with current arrangements.
Content moderation at scale requires automated systems that make mistakes at rates that are statistically small but absolutely large given content volumes measured in billions. Platforms are improving automated moderation through AI while maintaining human review for unclear cases and appeals. The systems will never achieve perfect accuracy, which means content moderation will permanently involve trade-offs between over-removal and under-removal with different stakeholders preferring different balances.
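The “statistically small but absolutely large” point is simple arithmetic worth making explicit. The figures below are hypothetical, since the text cites no specific accuracy rates or volumes:

```python
def expected_errors(daily_items: int, error_rate: float) -> float:
    """Expected misclassified items per day, assuming a fixed,
    independent error rate (hypothetical figures throughout)."""
    return daily_items * error_rate

# At 99.9% accuracy on 3 billion items per day, a platform still
# makes roughly 3 million moderation mistakes daily.
daily_mistakes = expected_errors(3_000_000_000, 0.001)
```

Every tenfold improvement in accuracy still leaves hundreds of thousands of daily errors at this scale, which is why human review and appeals remain structurally necessary rather than transitional.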
Transparency requirements are increasing with regulators demanding information about content moderation policies, enforcement rates, and appeals. Platforms are publishing transparency reports with varying detail and candour. The information is useful but limited because platforms control what they disclose and how they frame it. Independent audits and researcher access are being required in some jurisdictions to provide external validation of platform claims.
The geopolitical dimensions complicate content moderation when different jurisdictions have incompatible requirements. Content legal in one jurisdiction may be prohibited in another, requiring platforms to either segregate by jurisdiction, comply with the strictest requirements globally, or exit certain markets. The resulting decisions affect what information is available globally based on local political preferences, which has implications for free expression that are concerning regardless of one’s views on appropriate content moderation.
The implementation gap¶
Regulations are written with assumptions about capabilities, costs, and compliance that often diverge substantially from reality. The gap between regulatory requirements and practical implementation creates friction that’s resolved through negotiation, litigation, and gradual adjustment of requirements or capabilities until they align more closely.
Technical compliance challenges arise when regulations require capabilities that are difficult or impossible with current technology. Explainable AI, perfectly secure data handling, and completely effective content moderation are regulatory goals that technology cannot fully achieve. Companies implement best-effort compliance while regulators gradually calibrate requirements to what’s actually achievable rather than ideally desired.
Compliance costs affect different companies differently. Large companies can absorb regulatory compliance as overhead while small companies may find compliance consuming disproportionate resources. This creates competitive advantages for incumbents and barriers for new entrants, which is often opposite to regulatory objectives but consistent with how compliance costs typically operate. Regulators are aware of this but have limited options beyond exempting small companies or accepting that regulation inevitably favours larger players.
Enforcement capacity constraints mean that regulators cannot possibly monitor all potential violations. They must prioritise enforcement toward the most significant cases or most visible violations. This selective enforcement creates uncertainty about which requirements will be actively enforced versus which exist on paper but rarely trigger action. Companies calibrate compliance investments based on enforcement likelihood, which means under-resourced regulators get partial compliance at best.
The international coordination challenges are substantial when different jurisdictions impose inconsistent requirements. Companies operating globally must either comply with all requirements everywhere, which is expensive and sometimes contradictory, or segregate operations by jurisdiction, which is also expensive and complicated. Some level of international coordination exists through organisations like the OECD, but deep coordination is limited by different political systems, values, and regulatory approaches.
The dynamic nature of technology means regulations are often outdated by the time they’re implemented. Multi-year regulatory processes produce rules addressing the technology landscape as it was when drafting began rather than as it exists when rules take effect. This temporal mismatch is inherent to formal regulation and means regulations are most effective for mature stable technology and least effective for rapidly evolving areas like AI.
What comes next¶
The regulatory trajectory points toward more comprehensive requirements, stricter enforcement, and reduced tolerance for regulatory arbitrage through jurisdiction shopping or definitional ambiguity. The era of permissive technology regulation is concluding as governments assert that technology companies are subject to governance like other industries rather than existing in a special category with minimal oversight.
AI regulation will continue expanding as capabilities advance and as regulators understand implications better. The current frameworks are first attempts that will be revised based on experience and technological developments. Companies should expect regulatory requirements to increase rather than decrease, which means investments in compliance infrastructure are necessary rather than discretionary.
Competition enforcement will remain active with continued scrutiny of mergers, platform practices, and market dominance. The technology industry will face more aggressive antitrust enforcement than in previous decades regardless of which political parties control governments because concern about technology company power spans political divides even if specific concerns differ.
Privacy requirements will become more stringent globally as jurisdictions follow the GDPR model with local variations. Companies should expect data protection requirements to converge toward strict frameworks rather than permissive approaches winning out. Building privacy into products rather than treating it as compliance afterthought is increasingly necessary for avoiding regulatory conflicts.
Content moderation requirements will evolve based on political pressures that vary by jurisdiction and time. Platforms will face continued demands to remove harmful content while also facing accusations of censorship, which is a tension without stable resolution. The requirements will shift with political winds while the fundamental challenges of moderating billions of pieces of content remain constant.
The international fragmentation will persist and likely intensify as different jurisdictions pursue different regulatory approaches reflecting different values and political systems. Companies will adapt through regional compliance strategies, market exits where compliance is uneconomical, and lobbying for international coordination where possible. The global internet will remain global in infrastructure but increasingly fragmented in practices and content availability.
The regulatory environment facing technology companies has shifted from permissive to restrictive and shows no signs of reversing. Companies that thrived under minimal regulation are adapting to substantially more oversight, which affects their business models, growth prospects, and competitive dynamics. Whether this regulation achieves its objectives of protecting consumers, ensuring competition, and managing technology’s societal impacts without stifling innovation is the question that will be answered over the coming years through the interaction between regulatory requirements and industry responses. The answer is unlikely to satisfy everyone but will emerge regardless through the messy process of governments asserting authority over an industry that grew powerful while those governments were distracted by other concerns.