Artificial intelligence (AI) is both omnipresent and conceptually slippery, making it notoriously hard to regulate. Fortunately for the rest of the world, two major experiments in the design of AI governance are currently playing out in Europe and China. The European Union (EU) is racing to pass its draft Artificial Intelligence Act, a sweeping piece of legislation intended to govern nearly all uses of AI. Meanwhile, China is rolling out a series of regulations targeting specific types of algorithms and AI capabilities. For the host of countries starting their own AI governance initiatives, learning from the successes and failures of these two initial efforts to govern AI will be crucial.
When policymakers sit down to develop a serious legislative response to AI, the first fundamental question they face is whether to take a more “horizontal” or “vertical” approach. In a horizontal approach, regulators create one comprehensive regulation that covers the many impacts AI can have. In a vertical approach, policymakers craft bespoke regulations, each targeting a specific application or type of AI.
Neither the EU nor China is taking a purely horizontal or vertical approach to governing AI. But the EU’s AI Act leans horizontal, while China’s algorithm regulations lean vertical. By digging into these two experiments in AI governance, policymakers can begin to draw out lessons for their own regulatory approaches.
The EU’s Approach
The EU’s approach to AI governance centers on a single piece of legislation. At its core, the AI Act groups AI applications into four risk categories, each of which is governed by a predefined set of regulatory tools. Applications deemed to pose an “unacceptable risk” (such as social scoring and certain types of biometrics) are banned. “High risk” applications that pose a threat to safety or fundamental rights (think law enforcement or hiring procedures) are subject to certain pre- and post-market requirements. Applications seen as “limited risk” (emotion detection and chatbots, for instance) face only transparency requirements. The majority of AI uses are classified as “minimal risk” and subject only to voluntary measures.
The AI Act defines only broad “essential requirements” for each risk tier, placing different constraints on each category of application. The easiest way for developers to satisfy these mandates will be to adhere to technical standards now being formulated by European standards-setting bodies. This makes technical standards a key piece of the AI Act: they are where the general provisions of the legislation are translated into precise requirements for AI systems. Once the act is in force, years of work by courts, national regulators, and technical standards bodies will clarify precisely how it applies in different contexts.
In effect, the AI Act uses a single piece of horizontal legislation to fix the broad scope of what applications of AI are to be regulated, while allowing domain- and context-aware bodies like courts, standards bodies, and developers to determine exact parameters and compliance strategies. Furthering its ability to act in more context-specific ways, the EU is also pairing the requirements in the AI Act with co-regulatory strategies such as regulatory sandboxes, an updated liability policy to deal with the challenges of AI, and associated legislation focused on data, market structures, and online platforms.
This framework balances the dual imperatives of providing predictability and keeping pace with AI developments. Its risk-based approach allows regulators to slot new application areas into existing risk categories as AI’s uses evolve, offering flexibility alongside regulatory certainty. Meanwhile, the AI Act’s relatively flexible essential requirements also alleviate the key weakness of horizontal legislation: the difficulty of writing a single set of rules precise enough to fit the vast range of contexts in which AI is deployed.