New Requirements for AI-Generated Content: All AI-Created Material Must Disclose Its Origin

On 1 September, the Measures for the Identification of AI-Generated Synthetic Content (hereinafter the Identification Measures) formally came into force, stipulating that all AI-generated text, images, videos and other content must be clearly labelled as such. The Identification Measures represent a key national initiative to promote the standardised, healthy development of the AI industry. With the proliferation of generative AI, AI-generated content is entering platform content pools at an unprecedented rate. While its formidable productivity has yielded a wealth of novel content, it has also created fertile ground for illegal activities such as rumour-mongering, counterfeiting and infringement.


Dual Approach to AI Content Labelling: Explicit and Implicit

The Identification Measures define AI-generated synthetic content as information—including text, images, audio, video, and virtual scenes—created or synthesised using artificial intelligence technology. Labelling requirements encompass both explicit and implicit forms.

Explicit labelling requires service providers to incorporate identifiers within the generated content or interactive interface using text, audio, graphics, or other means readily perceptible to users. Examples include stating ‘Generated by AI’ via watermarks, labels, or voice prompts.
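
By way of illustration, the sketch below (in Python, using the Pillow library) stamps a visible ‘Generated by AI’ notice onto an image. The file names and placement are purely illustrative; real services would follow the formats prescribed by the standard.

```python
# Minimal sketch: stamping a visible 'Generated by AI' notice onto an
# image with Pillow, one possible form of explicit labelling. File
# names and placement are illustrative, not prescribed by the Measures.
from PIL import Image, ImageDraw

img = Image.open("generated.png").convert("RGB")
draw = ImageDraw.Draw(img)
notice = "Generated by AI"
x, y = 10, img.height - 24  # bottom-left corner, with a little padding
draw.text((x + 1, y + 1), notice, fill="black")  # simple shadow for contrast
draw.text((x, y), notice, fill="white")
img.save("generated_explicit.png")
```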

Implicit labelling involves embedding identification information within the metadata of content files through technical means. While such labelling may not be readily discernible to ordinary users, it facilitates content traceability and platform oversight.
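
To make the implicit route concrete, here is a minimal Python sketch that embeds an identification record in a PNG's metadata using Pillow's text chunks. The key name and field layout are hypothetical; the actual record format is defined by the national standard and its supporting practice guides.

```python
# Minimal sketch: embedding an implicit AI-generation record in a
# PNG's metadata via Pillow text chunks. The key name and fields are
# hypothetical; the real format is set by the national standard.
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

record = {
    "aigc": True,                      # content is AI-generated
    "producer": "example-ai-service",  # hypothetical provider identifier
    "content_id": "abc123",            # hypothetical traceability ID
}

img = Image.open("generated.png")
meta = PngInfo()
meta.add_text("AIGC-Label", json.dumps(record))  # hypothetical key name
img.save("generated_labelled.png", pnginfo=meta)

# Platforms can later read the record back for traceability checks:
with Image.open("generated_labelled.png") as check:
    print(check.text.get("AIGC-Label"))
```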

Furthermore, the Identification Measures refine and harmonise labelling requirements from existing regulations including the Regulations on the Management of Algorithm Recommendations for Internet Information Services, the Regulations on the Management of Deep Synthesis for Internet Information Services, and the Interim Measures for the Management of Generative Artificial Intelligence Services, thereby enhancing systemic coherence.


Platforms Must Fulfil Labelling and Dissemination Management Obligations

The Identification Measures impose explicit requirements on service providers. Those falling under Article 17(1) of the Regulations on the Management of Deep Synthesis for Internet Information Services must add explicit labels to generated synthetic content. Concurrently, they shall embed implicit labels within file metadata in accordance with Article 16 of the same regulations. Platforms providing content dissemination services must implement technical measures to regulate the propagation of AI-generated content, preventing the spread of false and harmful information.

The Identification Measures also strictly prohibit any organisation or individual from maliciously deleting, altering, forging, or concealing labels, or from providing tools and services for such acts. No party may infringe upon others' rights through improper labelling practices. This provision establishes a legal basis for combating technological abuse.

To ensure effective implementation of the Identification Measures, multiple supporting technical standards and practical guidelines have been released concurrently. The mandatory national standard ‘Cybersecurity Technology: Identification Methods for AI-Generated Synthetic Content’ has been approved and issued by the State Administration for Market Regulation and the Standardisation Administration of China, to take effect alongside the Identification Measures on 1 September 2025.

‘Legal compliance analysis reveals that these Measures are closely aligned with higher-level legislation such as the Cybersecurity Law and with relevant departmental regulations, establishing a clear framework for labelling obligations and enforcement,’ says Li Zhanghu, Senior Partner at Shanghai Jintiancheng (Chongqing) Law Firm. He contends that for enterprises, early deployment of explicit and implicit labelling measures, alongside refined compliance management processes, is the inevitable response to regulatory requirements. At the individual level, the rules add a safeguard for rights protection, enabling more effective defence against infringement and misinformation driven by AI-generated content. From an industry perspective, the new labelling regulations will steer the market towards healthier competition, spurring technological innovation and service upgrades. While technical implementation poses challenges, the supporting standards and guidelines provide direction for compliant implementation.

The National Cybersecurity Standardisation Technical Committee has also released the Cybersecurity Standard Practice Guide: Coding Rules for Artificial Intelligence Generated Synthetic Content Identification Service Providers, offering specific coding guidance for service providers undertaking metadata implicit identification. Moving forward, the committee will progressively introduce recommended standards and practice guides for metadata labelling specifications across different file formats and labelling methodologies for specific application scenarios, thereby establishing a multi-tiered, comprehensive technical standards framework.


Platforms Roll Out Detailed Rules in Response to the Identification Measures

Around 1 September, content platforms including Douyin, Tencent, and Bilibili, alongside AI service providers such as DeepSeek, issued detailed implementation rules for the Identification Measures.

On 1 September, Douyin released its Announcement on Upgrading AI Content Identification Functionality, further regulating the creation and dissemination of AI-generated content on its platform. The announcement indicates that Douyin has launched two core functions: first, an AI content labelling feature that helps creators add prompt labels to AI-generated content, making it easier for users to identify; second, an AI metadata read/write function that can detect implicit identifiers in, and write them to, AI content, providing technical support for content traceability.

Should creators fail to proactively add labels, Douyin will append explanatory markers on their behalf. On the one hand, the platform employs technical means to detect content suspected of being generated with AIGC technology, adding a label stating: ‘Suspected use of AI-generated technology; please exercise caution in discernment.’ On the other, in accordance with relevant regulations, AIGC service providers must embed implicit identifiers within metadata when generating AI content; when creators publish such content on Douyin and the platform verifies the presence of these implicit metadata identifiers, it displays a label stating ‘This work contains AI-generated content’ on the content page.
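
In outline, this two-pronged decision flow can be sketched as follows. The metadata reader and the detector below are simplified stand-ins for Douyin's internal systems, which are not public, and the metadata key reuses the hypothetical convention from the earlier sketch.

```python
# Minimal sketch of the two-pronged badge logic described above. The
# metadata reader and detector are simplified stand-ins for Douyin's
# internal systems, which are not public.
from typing import Optional

from PIL import Image


def read_implicit_label(path: str) -> Optional[str]:
    # Reuses the hypothetical 'AIGC-Label' PNG text-chunk convention
    # from the earlier metadata sketch.
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("AIGC-Label")


def looks_ai_generated(path: str) -> bool:
    # Placeholder for a classifier that flags suspected AIGC content.
    return False


def choose_badge(path: str) -> Optional[str]:
    # A verified implicit label takes precedence over heuristic detection.
    if read_implicit_label(path) is not None:
        return "This work contains AI-generated content"
    if looks_ai_generated(path):
        return ("Suspected use of AI-generated technology; "
                "please exercise caution in discernment")
    return None  # no badge for apparently human-made content
```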

DeepSeek stated it has implemented labelling for AI-generated synthetic content within its platform, explicitly alerting users that such material originates from AI. Users must not maliciously remove, alter, forge, or conceal these generation labels, nor use AI to produce or disseminate false information or infringing content, or to engage in any other unlawful activity.

Concurrently, DeepSeek has published the ‘Model Principles and Training Methodology Statement,’ detailing the fundamental principles, training data, and content generation mechanisms of its models. This aims to assist users in comprehensively understanding AI technology, using DeepSeek services appropriately, safeguarding users' rights to information and control, and mitigating risks arising from misuse or improper application.


Experts: Compliance Pressure, but Also Development Opportunities

Zhang Linghan, Professor at the Institute of Data Law at China University of Political Science and Law and a Chinese expert on the United Nations High-Level Advisory Body on Artificial Intelligence, noted that the Identification Measures represent a shift from concept to formalised regulation. They establish China's first systematic national framework for labelling AI-generated content, marking a pioneering institutional breakthrough from ‘zero to one’. She contends that for AI service providers and information dissemination platforms, the implementation of the Identification Measures brings both compliance pressures and development opportunities.

The pressure manifests in the explicit labelling, review, and management responsibilities providers must undertake, which undoubtedly increases technical and operational compliance costs. The opportunities, however, are equally significant. First, a unified and clear labelling framework avoids the redundant investment and adaptation barriers caused by ambiguous rules, substantially enhancing compliance efficiency across the industry. Second, by establishing clear regulatory boundaries, it propels the industry towards a new model of standardised competition, helping maintain market order and the sound development of enterprises. Finally, the framework leaves enterprises room to adapt and adjust: low-cost identification methods such as textual symbols reflect the deliberate gradualism of the regulatory design, which aims to protect innovative vitality and promote the industry's long-term healthy development.

Another highlight of the Identification Measures is the stipulation of both ‘explicit’ and ‘implicit’ identification methods. Zhang Linghan notes that the dual requirement for explicit and implicit labelling stems from the need to establish a dual-layer trust mechanism that is ‘user-perceptible and machine-recognisable,’ addressing governance demands at different levels.

Within the dissemination chain, explicit labelling (such as textual or audio prompts) enables users to instantly discern ‘what is generated,’ safeguarding their right to know and serving as front-end notification and alert mechanisms. Implicit labelling (such as metadata embedding) operates at the technical and regulatory backend. It embeds structured information—including generation source, model type, and creation time—into the underlying file records, providing reliable technical evidence for post-event tracing, liability determination, and regulatory enforcement.
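
Purely as an illustration of what such a structured record might carry (the actual field names and encoding are fixed by the national standard and its supporting practice guides), consider:

```python
# Illustrative only: the kind of structured information an implicit
# label might carry. Actual field names and encoding are defined by
# the national standard and its supporting practice guides.
import datetime
import json
from dataclasses import asdict, dataclass


@dataclass
class ImplicitLabel:
    source: str       # generation source (the providing service)
    model_type: str   # model family used to generate the content
    created_at: str   # creation time, ISO 8601

record = ImplicitLabel(
    source="example-ai-service",
    model_type="text-to-image",
    created_at=datetime.datetime.now(datetime.timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```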

Zhang Linghan emphasised that refining the content labelling system constitutes a cross-departmental, multi-stakeholder, long-term systemic endeavour. The current phase necessitates a phased implementation approach of ‘establishing before dismantling,’ with regulatory methods continuously refined through iterative dynamic adjustments. Existing measures represent not an endpoint, but rather a foundation upon which future oversight will depend on the synergistic evolution of technology and institutional frameworks.

On the one hand, robust, cross-platform-compatible technologies such as lightweight digital watermarks, frequency-domain embedding, and decentralised identity markers have yet to be introduced, and identification dimensions and recognition accuracy require further expansion. On the other, the regulatory framework itself is undergoing continual refinement: feedback mechanisms established through sandbox trials and similar approaches enable dynamic updates to technical guidelines. The ultimate objective is a closed-loop governance system in which ‘unidentified data is not transmitted’ – a goal that will be reached incrementally.