In China, Planning Towards AI Policy Paralysis
How government plans, combined with political tightening, form a barrier to AI governance
This article was published as part of the Stanford-New America DigiChina Project's first special report, AI Policy and China: Realities of State-Led Development.
As President Xi Jinping and the Chinese Communist Party (CCP) exert more centralized and ideological control over legal institutions, the challenges of AI deployment across multiple industries and throughout society demand flexible and innovative responses. However, while the government's top-level plans for AI advancement call for policy adaptations and taking a lead in global regulation, the same plans appear to be shrinking the space for policy innovation even further.
AI, with its broad applications and vague definitions, is proving challenging for legal regimes across the world. In China, the complex dynamics of regulating AI coincide with the CCP's increased institutional and ideological control over legal institutions and the private sector. This combination is already having negative impacts on the Chinese legal system, particularly its capacity to respond to and regulate AI, because it affects the capacity of China's institutions to develop and govern.
National Ambitions for AI Amidst Increasingly Centralized Governance
When it comes to understanding China's bold AI-related declarations and actions, it is important to put them into this institutional context: to look beyond China's stated ambitions into the more nuanced reality of how "AI" is being described and used within China's political and legal institutions.
The Chinese system is defined in part by its political and legal hierarchy. Provincial and local governments, for example, do not pass legislation but rather "implement" laws passed by the National People's Congress and regulations issued by the State Council. The CCP also promotes an ideology that emphasizes its own singular legitimacy and wisdom to govern China. This ideology does not exist only at the top, but rather spreads throughout the various bureaucratic and legal institutions across China. One routine characteristic of this system is the use of overarching plans to drive industrial and other important policy goals, and the headline-catching 2017 New Generation Artificial Intelligence Development Plan (AIDP) and its various local iterations continue this longstanding governance model.
The Chinese system has grown even more centralized in recent years, as the CCP has worked to entrench its formal powers over state institutions, including government agencies and courts, as well as over greater Chinese society. For example, a growing number of Party, regulatory, and court documents emphasize the Party's overall leadership. The CCP has also established a number of new Party organizations that sit outside the formal state hierarchy and are therefore effectively "extra-legal," or beyond the control or supervision of law, including commissions that report directly to Xi. The Cyberspace Administration of China, for its part, reports directly to the Party Central Committee.
Private companies, particularly tech companies, are also facing increased CCP interference and control. Tactics ranging from buying company shares to requiring the establishment of Party Committees have, in the view of some analysts, allowed the CCP to exert growing influence over private tech companies, transforming them into something closer to state-linked enterprises.
This environment of increasing extralegal powers and authority has exacerbated a bureaucratic paradigm that prioritizes political performance and loyalty, even over efficiency. Government agencies have reportedly responded to increased centralization and ideological control with fear and paralysis; when the correct way forward is unclear, sometimes it seems safer to do nothing at all.
China's plans and stated ambitions for the future of AI are far from exempt from these trends of centralization and political discipline.
Chinese Governance of AI and Its Effect on Politics and Development
At first glance, China's approach to the governance of AI appears similar to that of other countries. Other national governments, as well as international bodies, have released similar "AI plans" and documents discussing the importance of ethics and principles when it comes to developing and deploying AI.
The difference is in the institutional details. Currently, the real substance of how AI is being governed across the world is not as much in the grand plans and pronouncements but rather in the particulars of how institutions and individuals affect the role of AI in their lives and communities. It is in this context that AI appears to be revealing and potentially exacerbating shortcomings within China鈥檚 political and legal institutions.
For example, while China's 2017 AI plan is no more vague than any other national document discussing AI, it signals not only intent, but political control. It is as much an announcement to the world that China will lead in AI as it is to domestic institutions that the Party will rule AI and the future it is to power.
If you combine the Party's assertion of control over AI with its tightened ideological control overall, as well as the indeterminate breadth of AI as a concept, such signaling could well exacerbate problems within government institutions. Overemphasis on "controlling" AI and/or "winning" the "AI race" could put further pressure on China's institutions and reduce their regulatory flexibility.
Old-fashioned bureaucratic in-fighting could also stifle government innovation. Numerous central government agencies were involved in drafting the AIDP. At the same time, Chinese government agencies have a long history of turf battles and competition. Given the complexity of AI as a legal concept and the political impetus to "win" at governing it, how are those institutions supposed to cooperate? Assigning responsibility to committees does not automatically lead to institution building.
Recourse to existing rules won't cut it, either. There are some laws on the books that govern the use of algorithms. However, many of these laws include idealistic, politically correct language that is difficult to implement. The 2017 Cybersecurity Law, for example, requires that network operators "respect social morality" and "bear social responsibility." As the People's Daily has noted, such language is having trouble shaping behavior in practice.
There is also a general dearth of regulation in a number of industries in which AI is being deployed. In transportation, for example, there is no law that regulates safety or other key issues related to autonomous vehicles (though there are notices imposing requirements on smart maps in such vehicles and on "internet-connected" cars, which appear to be cars with some internet-accessing features). Local regulations are lacking too (especially compared to the United States, where 40 states have enacted legislation and/or executive orders). "Smart" medical products and mobile medical apps are also largely unregulated so far.
Paths Not Taken and Not Available for AI Governance in China
It is of course impossible to say definitively why there is a dearth of laws. It is possible, however, that the complexity of AI requires regulatory flexibility and institution building and that, currently, the CCP is placing heavy emphasis on increased bureaucratic and ideological control at the expense of flexibility. The desire for more control does not automatically translate into institution building. In the current climate of institutional paralysis, government actors might not have incentives to make potentially risky legal innovations, and they might instead continue to stagnate.
While local governments in China have a history of innovation in certain contexts, this dynamic appears to be weakening. One unsigned commentary observed that, while information technology was supposed to make life easier for bureaucrats, it appears instead to have added more hurdles and vectors for political risk. AI, being both complex in a way that requires innovation and politically important in a way that requires signaling, is straining individuals within China's bureaucracies.
One route taken in many countries around the world is less open for China. Since legislatures across the world have generally been slow to respond to the advent of AI, civil society organizations have played a large role in the nascent development of AI governance, since they can act spontaneously as a check on both government and private sector power. The AI Now Institute, for instance, has published several reports on different uses of algorithms and their impact on society. But President Xi has overseen a large crackdown on civil organizations, particularly law-oriented ones. As such, civil society organizations within China lack the ability to fill the governance and public interest gap left by a bureaucracy lagging behind the development of technology.
Courts within China have recently pursued institutional innovations, but these innovations generally serve to insulate courts from criticism and political risk rather than increase their authority or capacity to address complex issues. Given AI's breadth and political import, Chinese courts might not only face unprecedented legal challenges in cases that involve AI; they might also face unprecedented political pressure to avoid any chance of hindering the CCP's plans for AI. The courts are still worth watching, however: The Supreme People's Court might issue guidance for lower courts on dealing with cases that involve algorithms. If such guidance is issued, it will be important to pay attention to local courts and how they react to cases involving algorithmic decision making.
AI Governance Dilemmas Have Broader Political Effects
The complexity and novelty of governing AI requires space for regulatory flexibility: the space to make mistakes and experiment. The CCP is currently closing legal spaces across the board while simultaneously emphasizing the importance of governing AI successfully, and there is some evidence that political signaling in the AI space is taking precedence over realistic institutional creativity.
More national drives for local governments to fund AI winners mean more money out the door, and many local governments in China already face potentially unsustainable levels of debt. AI projects are far from guaranteed to break this cycle, as many of the government-funded "AI startups" do not make much use of AI, and many suffer from shaky business models.
To understand China鈥檚 future AI governance and technological development, one must go beyond the stated principles and ambitions and observe the development (or lack thereof) of institutions. There are signs that the CCP recognizes problems of bureaucratic inaction and is looking for ways to improve local capacity. Whether efforts to meet these challenges will prove successful remains an open question.