
China is exporting its AI governance model; democracies must act now or risk letting others define the future of speech.
By Jordi Calvet-Bademunt and Jacob Mchangama
“China is going to win the AI race,” Nvidia CEO Jensen Huang warned recently. In a follow-up statement shortly afterward, Huang softened his tone, noting that China was only “nanoseconds behind America in AI.”
Whether apocalyptic or cautious, Huang’s statements echoed across Silicon Valley and Washington, reinforcing a conviction now shared by artificial intelligence (AI) investors and policymakers alike: The United States must outpace China in the AI race by attracting talent, securing computing power, and shaping the global AI stack.
While headlines fixate on who will build the most powerful models, China has already surged ahead in a different but no less consequential contest: the race to shape global AI governance. On this front, Beijing’s agenda raises significant concerns about freedom of expression around the world. The push echoes China’s yearslong effort to influence international technical standards and promote authoritarian internet governance objectives.
On July 26, China released its Global AI Governance Action Plan, an ambitious road map that—in tandem with other proposals, such as the World Artificial Intelligence Cooperation Organization—seeks to position Beijing as the architect of international AI rules. On the surface, the plan’s language sounds well-intentioned. According to Beijing, AI should be a “public good for the international community,” governed in the name of “safety” and shared benefit.
However, democracies have many reasons to be wary of China’s efforts to shape global AI governance. China already maintains a system of strict online censorship through its Great Firewall, and it is replicating that system in the AI domain, constructing a regime of anticipatory censorship that subordinates information technologies to state ideology. One only needs to use DeepSeek, a leading Chinese model, to experience firsthand its unwillingness to engage with topics the state deems sensitive.
Yet democracies are not immune to speech-restrictive impulses of their own. In July 2025, Poland’s government reported xAI to the European Commission after X’s chatbot Grok generated antisemitic content and offensive comments about Polish Prime Minister Donald Tusk. Warsaw argued that such behavior could violate the EU’s new AI Act, which requires powerful models to mitigate ill-defined “systemic risks,” including “negative effects on … society as a whole”; the Commission can impose fines of up to 7 percent of global annual turnover for noncompliance. A spokesperson for the Commission told journalists, “We are taking these potential issues very seriously … we are in touch with the national authorities and with X itself.”