Study watch: China’s AI Regulations and How They Get Made
A review of a new report by Matt Sheehan, a fellow at the Carnegie Endowment for International Peace

A phenomenon often discussed in policy circles is the ‘Brussels effect’, which refers to the European Union’s ability to influence how firms operate globally by changing laws at home. The term, coined by Columbia University professor Anu Bradford, captures the observation that companies often find it easier to comply with EU regulations internationally than to maintain different standards for different markets. Regulations like the General Data Protection Regulation, for example, can become the default rules companies follow around the world, even when operating outside the EU's jurisdiction.
Regulation, in other words, changes behaviour far beyond the jurisdiction in which it is written. For AI, this dynamic has led some to ask whether the Brussels effect could produce stricter AI regulation globally through the European Union’s upcoming AI Act. But, as you might have noticed from the title, this post isn’t about the EU. It’s about China. While commentators have connected a handful of moves in China with legislative responses in Europe––for example, the introduction of Chinese social scoring systems and provisions in the EU’s AI Act seeking to ban analogous practices––the impact of Chinese AI governance on global norms is understudied.
A new paper by Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, argues that international discourse on Chinese AI governance often fails to take Chinese regulations seriously. Sheehan suggests that Chinese regulations are either dismissed as irrelevant (on the assumption that the government can simply ignore them) or treated as political fodder (U.S. lawmakers have invoked global regulation as an arena in which to engage China). My own view is that it is difficult to argue that the regulations are simultaneously being ignored and provoking responses from lawmakers outside of China, though I agree with the author that these responses are generally about the idea of regulation rather than its substance. As for understanding and responding to the ins and outs of the regulations themselves, Sheehan writes:
The specific requirements and restrictions they impose on China’s AI products matter. They will reshape how the technology is built and deployed in the country, and their effects will not stop at its borders. They will ripple out internationally as the default settings for Chinese technology exports. They will influence everything from the content controls on language models in Indonesia to the safety features of autonomous vehicles in Europe.
Sheehan’s paper is the first of a three-part series that seeks to understand the terminology, concepts, and requirements embedded in the regulations, while unpacking the contingencies on which those components are based. Subsequent papers will examine the political and social roots of the ideas and their entanglement with ideology, the international AI discourse, and Chinese research and companies. The first paper, which is the subject of this review, offers an overview of AI regulations and introduces the primary actors and influences in the policy development process.
An overview of Chinese AI governance
The paper begins by focusing on national-level Chinese policy documents that explicitly and primarily target AI or algorithms for regulation or governance. Noting the difficulties in defining both ‘AI’ and ‘governance’, it proceeds by excluding laws like the Personal Information Protection Law that influence AI without directly regulating it. Using these guardrails, the analysis identifies nine major Chinese AI governance policy documents from 2017 to 2023.
2017: New Generation AI Development Plan - Laid out a high-level timetable for AI governance regulations through 2030.
2019: Governance Principles for New Generation AI - Issued by an expert committee under the Ministry of Science and Technology (MOST); outlined eight principles for AI governance.
2020: Outline for Establishing Rule-of-Law-Based Society - Called for measures on recommendation algorithms and deepfakes.
2021: Guiding Opinions on Algorithm Governance - Provided general guidance on regulating online algorithms through 2024.
2021: Ethical Norms for New Generation AI - The MOST expert committee outlined norms such as human control over AI.
2021: Recommendation Algorithm Regulation - The first major binding algorithm regulation, motivated by online information control; created the algorithm registry.
2022: Technology Ethics Governance Opinions - A CCP Central Committee document, based on a MOST draft, focused on ethics governance mechanisms.
2022: Deep Synthesis Regulation - Targets AI generation of text, images, and video; requires labelling. Motivated by deepfakes.
2023: Draft Generative AI Measures - A draft regulation covering similar ground to the deep synthesis rules, but focused on generative AI services like ChatGPT.
Note: Please refer to the excellent summary on pages 10 and 11 of the paper for details of each regulation, including original Chinese text and English translations, the bureaucratic bodies involved, and notes on provisions.
Of these, the paper highlights the 2021 recommendation algorithm, 2022 deep synthesis, and 2023 generative AI regulations as the most important for in-depth analysis. With respect to the first, the Provisions on the Management of Algorithmic Recommendations in Internet Information Services, the author writes that it was originally developed in response to concerns over algorithms controlling the flow of online information in China. Sheehan’s analysis goes on to trace the origin of the term “algorithmic recommendation” (算法推荐) to a 2017 backlash against ByteDance before arguing that a central aim of the policy is to encourage algorithms to promote particular values in order to help the state “set the agenda of public discourse.”
The regulation was formed amidst “public outcry over the role algorithms play in creating exploitative and dangerous work conditions for delivery workers,” a dynamic the author states will be discussed in more detail in the second paper of the series. The section concludes by noting that the legislation bars excessive price discrimination and anti-competitive practices by algorithms, gives users rights such as turning off recommendations, deleting tags, and receiving explanations, and establishes an ‘algorithm registry’ through which developers submit information on training and deployment (including information about training data).
Next, Sheehan discusses China’s first-of-its-kind regulation focused on “deep synthesis technologies”. The regulation requires digital service providers, such as Weibo or Baidu Search, to authenticate user identities and add a “conspicuous label in a reasonable position” to indicate AI-generated content, while also prohibiting the generation of fake news. As Sheehan explains:
The deep synthesis regulation was scoped to include the use of algorithms to synthetically generate or alter content online, including voice, text, image, and video content. It requires that deep synthesis content conform to information controls, that it is labeled as synthetically generated, and that providers take steps to mitigate misuse. The regulation includes a number of vague censorship requirements, such as that deep synthesis content “adhere to the correct political direction,” not “disturb economic and social order,” and not be used to generate fake news. When such content “might cause confusion or mislead the public,” it must include a “conspicuous label in a reasonable position” to alert the public that it was synthetically generated. The regulation also includes a number of provisions targeting misuse, such as requiring that deep synthesis users register with their real names and that platforms prompt users to obtain the consent of anyone whose personal information is being edited. Finally, it requires that deep synthesis providers make a filing to the algorithm registry.
My own perspective is that, compared with the US’s and EU’s approaches, the regulation signals a style of AI governance that is more explicitly concerned with maintaining social stability. That said, China’s AI governance regime consists of many entities across different levels of government, complementing and competing with one another, and most resemble their Western counterparts and face similar challenges.
For example, similar to the EU's AI Act, China’s deep synthesis regulation tries to allocate responsibilities to different entities across the AI value chain, indicating that while “service providers” shoulder the main burden of managing users and content, “technology supporters” also have a duty to perform safety assessments and support government inspections. Similarly, Article 17 requires “noticeable marking” of AI-generated content but does not stipulate how this should be achieved, e.g. via watermarking or other provenance verification services.
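To make the labelling requirement concrete, here is a minimal sketch of what a visible label might look like in an image pipeline. It assumes the Pillow library and a hypothetical file generated.png; the regulation itself prescribes no mechanism, and real providers might instead rely on invisible watermarks or provenance metadata.

```python
# Minimal sketch: stamping a visible notice onto a generated image.
# Purely illustrative -- the deep synthesis regulation does not prescribe
# a mechanism; providers might instead use invisible watermarks or
# provenance metadata (e.g. C2PA-style manifests).
from PIL import Image, ImageDraw

def label_synthetic_image(path_in: str, path_out: str,
                          notice: str = "AI-generated content") -> None:
    """Overlay a conspicuous text label near the bottom-left of an image."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    x, y = 10, img.height - 30  # "a reasonable position"
    box = draw.textbbox((x, y), notice)
    draw.rectangle(box, fill=(0, 0, 0))  # dark backing box for legibility
    draw.text((x, y), notice, fill=(255, 255, 255))
    img.save(path_out)

# Hypothetical usage with a file produced by a generative model:
label_synthetic_image("generated.png", "generated_labelled.png")
```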
The author then turns to the 2023 Measures for the Management of Generative Artificial Intelligence Services, drafted in response to the release of ChatGPT and other popular generative models. The regulation also seeks to fill gaps in the deep synthesis regulation, whose mandate works well for mediating visual or audio content but is less suited to addressing concerns around text generated by large language models. Currently a draft open for public comment, the legislation reinforces content mandates and algorithm registry requirements, stipulates that training data be accurate, objective, and diverse, and requires that generated content be “true and accurate”. The last two of these provisions, the author notes, may prove impossible to fulfil given the scale of training data involved and the ‘hallucination’ problem, whereby models produce responses with no factual grounding.
Structural similarities in China's AI regulations
Sheehan argues that China’s AI regulations share three structural similarities. These are the choice of algorithms as a primary focus for regulators, the building of regulatory tools and bureaucratic expertise, and the ‘vertical and iterative’ approach that seeks to lay the groundwork for a broader AI law.
Beginning with algorithms as a point of entry, the paper suggests that “China’s approach to AI governance has been uniquely focused on algorithms,” as opposed to regulating training data, compute, or the ultimate actions taken or enabled by an AI product. This choice, Sheehan argues, is “clearly displayed in Chinese policy discourse around regulations and the decision to make algorithms the fundamental unit for transparency and disclosure via the algorithm registry.” In practice, this means that the registry requires separate filings for each algorithm in an app or service, such as personalised recommendation or content filtering (a hypothetical sketch of such filings follows below). While the paper suggests that the approach reveals a belief that effective regulation requires understanding and intervening at the algorithm level, it also acknowledges that other approaches exist, such as those focused on training data (as in the generative AI legislation) or specific outcomes (such as the reflection of certain values, which is present in several of the new laws).
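To illustrate the per-algorithm structure of the registry, here is a hypothetical sketch of what separate filings might contain. The actual registry is an online filing system operated by the Cyberspace Administration of China; the field names below are invented for illustration and do not reflect the real schema.

```python
# Hypothetical sketch of per-algorithm registry filings. The field names
# are invented here to illustrate the structure the paper describes
# (one filing per algorithm, not per app), not the real filing schema.
from dataclasses import dataclass

@dataclass
class AlgorithmFiling:
    provider: str               # company operating the service
    service: str                # the app or service the algorithm runs in
    algorithm_name: str         # one filing per algorithm, not per app
    algorithm_type: str         # e.g. "personalised recommendation"
    training_data_summary: str  # high-level description of training data

# A single app can require several filings, one per algorithm:
filings = [
    AlgorithmFiling(
        provider="ExampleCo",
        service="ExampleApp",
        algorithm_name="feed-ranker",
        algorithm_type="personalised recommendation",
        training_data_summary="user interaction logs",
    ),
    AlgorithmFiling(
        provider="ExampleCo",
        service="ExampleApp",
        algorithm_name="content-filter",
        algorithm_type="content filtering",
        training_data_summary="moderation labels on public posts",
    ),
]
```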
The next argument, concerning regulatory tools and bureaucratic know-how, is that China’s approach to AI governance seeks to build a bank of knowledge that can be deployed in future regulations. The central proof point for this claim is the algorithm registry itself, which, the paper suggests, exists as a “standardized disclosure tool that ministries can easily include in future regulations, refining its requirements as needed.” Similarly, the author notes that Chinese officials reportedly had little knowledge of AI when they first met with the firms they were seeking to regulate. I view this dynamic as similar to the approach seen in the US, UK, and EU. AI is, after all, a rapidly evolving technology that takes significant time to understand, a fact as true of regulators as of industry watchers, civil society, or the public. And the nature of public policy (and indeed most intellectual activity) is such that experience gained on one effort carries over when work begins on a similar project.
The final similarity is the “vertical and iterative” nature of Chinese regulation. In this context, ‘vertical’ refers to regulations that target a specific application or manifestation of a technology, in contrast to the European Union’s AI Act, which aims to provide a web of requirements covering all applications of a given technology. China’s AI regulations are described as “relatively vertical” in that each law covers a number of related applications and imposes requirements specific to those concerns. The regulations are ‘iterative’ because “if the government deems a regulation it has issued to be flawed or insufficient, it will simply release a new one that plugs holes or expands the scope, as it did with the generative AI draft regulation expanding on the deep synthesis measures.” The section closes, however, with the observation that in June 2023, China’s State Council announced that it would begin preparations on a draft Artificial Intelligence Law (人工智能法) to be submitted to the National People’s Congress.
Motivations and policy-setting in practice
Before wrapping up, I will briefly summarise two final areas that the paper considers: the core motivations behind China’s AI governance efforts, and reflections on the process by which China sets its AI policies.
As for the former, the author outlines four primary goals driving China's AI regulations. These are controlling information and technology in service of the state; mitigating economic, social, and ethical harms; advancing China's position in AI development and application relative to its global peers; and establishing China as a leader in AI governance. While the first three are deemed important goals, the analysis proposes that the fourth objective––global leadership in governance––ought to be considered a ‘nice-to-have’ rather than a core motivation.
With respect to setting policy, Sheehan introduces a four-layer conceptual model for how China formulates AI regulations. The model begins with the ‘real world roots’: the economic, political, social, and technological conditions that determine the shape of the action space in which policymaking can take place. The paper notes that exogenous shifts can provide the impetus for new regulations, as ChatGPT’s release did for the draft generative AI measures. Second, ideological filters and constraints aligned with the state’s long-term priorities narrow the action space further. Third is the ‘world of ideas’, in which think tanks, companies, scholars, and the media debate problems and solutions within those ideological constraints. Fourth and finally come the ‘party and state bureaucracies’, in which ministries and agencies like the Cyberspace Administration of China formalise regulations while competing to get their preferred policies adopted higher up the chain.
The paper concludes by reminding us that Chinese AI governance is poised to transition from fragmented, application-specific regulations to a comprehensive national AI law. This trajectory, the author argues, mirrors the evolution of the country's internet regulations, which culminated in the landmark Cybersecurity Law of 2017. While the timeline for the national AI law remains uncertain, a draft could be published between late 2023 and 2024. By Sheehan’s account, this piece of legislation will be shaped by a complex web of influences and will set the tone for AI regulation around the world. Even if a ‘China effect’ in AI governance looks unlikely, the paper makes a compelling case for those interested in AI policy to spend time understanding the Chinese regulatory landscape.