On 1 September 2025, China’s Measures for the Labelling of AI-Generated and Synthetic Content (the “Labelling Rules”), issued by the Cyberspace Administration of China (CAC), entered into force. They build on the Deep Synthesis Provisions in effect since January 2023; the official text is available here: CAC – Deep Synthesis Measures.
This framework requires model providers, application developers, and distribution platforms to ensure that any content generated or altered by AI — text, images, music, audio, video, or virtual scenes — is clearly identified.
Two cumulative duties apply:

- explicit labelling: a visible or audible notice informing users that the content is AI-generated or AI-altered;
- implicit labelling: machine-readable metadata embedded in the content file, making its synthetic origin technically traceable.
Liability extends across the entire chain, with the CAC retaining oversight and enforcement powers. No exceptions are foreseen, not even for artistic or satirical content: transparency is framed as an absolute duty owed to users.
By contrast, the EU AI Act, adopted in 2024 after negotiations that began in 2021, provides a narrower transparency regime under Article 50: users must be informed when they interact with an AI system or are exposed to synthetic content, notably deepfakes. Exceptions exist for artistic and satirical works, and for cases where the artificial nature of the content is obvious. Pioneering when drafted, the Act now appears partly outdated next to China’s more recent and more stringent approach.
Both frameworks pursue the same goal of preserving public trust, but China goes further by imposing systematic and technically traceable labelling.
Special thanks to Landy Jiang and Jenney Zhang of Lusheng Law Firm in Beijing, members of our association ILAAI, for their leading role in monitoring AI regulation worldwide.