Regulatory stirrings

The Patrician has observed that Ankh-Morpork’s response to any sufficiently successful activity is to form a committee, draft some statutes, and appoint inspectors who arrive three years after the problem has moved to an adjacent building. Governments have recently applied this method to the technology industry, which spent a decade doing quite a lot of interesting things while everyone responsible for oversight was busy with something else.

The regulatory response is emerging across jurisdictions with varying degrees of coherence, urgency, and understanding of what is actually being regulated. Europe is writing comprehensive frameworks addressing everything simultaneously. The United States is applying old laws to new situations and hoping the courts agree. China is implementing regulations that serve both governance and industrial policy, sometimes in the same sentence. Smaller jurisdictions are copying larger ones while hoping nobody notices.

Companies that built empires on regulatory ambiguity are discovering that ambiguity eventually resolves, though not always favourably. The resulting negotiations between regulators playing catch-up and companies preferring the previous arrangement are producing regulations that will either meaningfully constrain technology companies or provide symbolic victories while leaving fundamental practices largely intact. It is not yet clear which.

The European approach

Europe has adopted maximalist regulatory ambitions through GDPR, the Digital Markets Act, the Digital Services Act, and the AI Act, which collectively represent the most comprehensive attempt to regulate technology through legislation since the invention of the printing press. The AI Act categorises systems by risk and imposes requirements proportional to that risk, which is sensible in theory but first requires determining what counts as AI, which is considerably harder than the legislation acknowledges.

The compliance burden will be substantial, particularly for smaller companies. Large companies can absorb regulatory costs as overhead. Startups may find compliance consuming the resources needed to build a product. This potentially advantages established companies over new entrants, which is the opposite of the stated objectives and entirely consistent with how regulatory compliance costs have worked throughout recorded history.

The Patrician notes that the extraterritorial reach of European regulation means companies worldwide must often comply with European requirements regardless of where they operate, which either represents laudable global leadership or remarkable regulatory ambition, depending on your jurisdiction and temperament.

The American approach

The United States has largely eschewed comprehensive legislation in favour of applying existing laws to new situations and hoping the courts find this convincing. Antitrust cases against Google, Meta, Amazon, and Apple are challenging practices that were previously ignored on the grounds that technology is different and should be left alone. Whether this produces meaningful structural change or merely expensive legal proceedings will be determined by courts grappling with competition law written for industries that sold physical objects.

AI regulation at the federal level remains largely absent beyond sector-specific rules and executive orders that express firm intentions without creating binding obligations. Several states are filling this gap with their own requirements, creating compliance complexity that companies manage by complying with the strictest requirements everywhere, which is itself a form of regulatory power California exercises without quite admitting it.

Privacy, content, and everything else

GDPR has become the de facto global standard, partly through the logic of its requirements and partly because companies found it easier to comply globally than to maintain different practices by jurisdiction. The right to explanation for automated decisions is creating tensions with AI systems that cannot straightforwardly explain themselves, which is either a problem for the AI or a problem for the requirement, and both sides are arguing this point with considerable energy.

Content moderation has no good solution. Platforms are simultaneously accused of allowing too much harmful content and of removing too much legitimate content, by different people, often about the same content. Regulatory frameworks require better moderation while free speech advocates require less of it. The platforms are implementing systems that will satisfy neither constituency completely, which is probably the best available outcome.

The Patrician’s assessment

Regulations address yesterday’s technology. By the time comprehensive AI regulation arrives, it will be governing capabilities that current frontier systems have surpassed. This is frustrating for everyone except companies benefiting from the regulatory lag, who find it quite satisfactory.

The Patrician concludes that more regulation is coming, that some of it will be well-designed and some will be poorly designed, and that companies should engage with regulatory processes rather than assuming regulations written by people who have heard about AI but not used it will produce sensible requirements without industry input. He does not suggest that industry input produces unbiased results. He merely suggests it produces better-informed results.

The regulatory environment has shifted from permissive to restrictive and shows no sign of reversing. This is probably appropriate. Whether it achieves its objectives without creating worse problems will be answered over coming years through the interaction between regulatory requirements and the industry’s well-funded response. The Patrician is watching this interaction with the attention it deserves and making notes.