China enforces new AI content identification rules starting today

By setting clear labelling requirements and establishing responsibility across the content ecosystem, China is positioning itself to combat AI misuse and enhance digital content authenticity.

The ‘Measures for Identifying Artificial Intelligence-Generated Synthetic Content,’ jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, officially came into force today, 1 September. The regulation introduces a standardised system for identifying synthetic content created with artificial intelligence (AI), aiming to promote responsible AI development and safeguard the public interest.

What the measures cover

The new measures define AI-generated synthetic content as any text, images, audio, video, or virtual environments created using AI technologies. To ensure transparency, the regulation requires service providers to apply two types of identifiers to such content:

  • Explicit identification: Visual or audible indicators like labels, symbols, or watermarks that are clearly perceivable by users.
  • Implicit identification: Technical markers such as metadata and digital watermarks embedded within the file, invisible to end-users but detectable by systems.

Who must comply

These rules apply to network information service providers that fall under existing AI, algorithm, or deep synthesis regulations. Such entities must:

  • Add explicit labels when content is shared publicly or offered for download.
  • Embed implicit metadata identifying the content’s synthetic nature, provider details, and content ID.
  • Clearly disclose their practices in user agreements and maintain logs of synthetic content distribution.
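The implicit identifier described above can be pictured as a small machine-readable record embedded in a file's metadata. The sketch below is purely illustrative: the field names (`AIGC`, `Producer`, `ContentID`) and the JSON encoding are assumptions for demonstration, not the official schema defined by the measures.

```python
import json

# Hypothetical implicit-identifier payload. Field names are illustrative
# stand-ins for the three elements the rules require: the content's
# synthetic nature, provider details, and a content ID.
implicit_label = {
    "AIGC": True,                      # content is AI-generated/synthetic
    "Producer": "example-ai-service",  # hypothetical provider identifier
    "ContentID": "a1b2c3d4",           # hypothetical content ID
}

# Compact serialised form that a provider might embed in file metadata
# (e.g. an image metadata block), invisible to end users but
# detectable by downstream systems.
payload = json.dumps(implicit_label, separators=(",", ":"))
print(payload)
```

A detector on the receiving end would parse this record back out of the file's metadata rather than rely on the visible label alone.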

Responsibilities and enforcement

Online platforms that host or disseminate synthetic content must:

  • Detect whether content includes proper identification.
  • Add warnings for suspected or user-declared synthetic content.
  • Prompt users to voluntarily disclose whether their content includes AI-generated elements.
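The platform-side duties above amount to a simple decision: attach a warning when content either carries an embedded synthetic marker or has been declared as AI-generated by the uploader. A minimal sketch, assuming a hypothetical JSON metadata record with an `AIGC` flag (not the official format):

```python
import json
from typing import Optional

def needs_ai_warning(metadata: Optional[str], user_declared: bool) -> bool:
    """Decide whether to attach an AI-content warning.

    Illustrative logic only: flag content whose embedded metadata
    carries a (hypothetical) "AIGC" marker, or that the uploader
    has voluntarily declared as AI-generated.
    """
    if user_declared:
        return True
    if metadata:
        try:
            record = json.loads(metadata)
        except json.JSONDecodeError:
            return False  # unreadable metadata: no automatic flag
        return bool(record.get("AIGC"))
    return False

print(needs_ai_warning('{"AIGC": true}', False))  # True
print(needs_ai_warning(None, True))               # True
print(needs_ai_warning(None, False))              # False
```

In practice a platform would combine such checks with content-level detection, since a missing marker does not prove the content is authentic.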

Platforms distributing AI-enabled apps are also required to verify whether those apps use synthesis technologies and to confirm that they include the necessary identification tools.

Violations of these rules may be penalised under relevant cybersecurity, telecom, and public safety laws. The regulation strictly prohibits tampering with or removing identifiers, or providing tools to do so.
