EU publishes first draft Code of Practice on transparency for AI-generated content

The European Commission has released the first draft Code of Practice on Transparency of AI-Generated Content. The document sets out how providers and deployers of generative AI systems should mark, detect, and label AI-generated or manipulated content under Article 50 of the AI Act.

The European Commission’s first draft Code of Practice on Transparency of AI-Generated Content sets out an emerging framework for how AI developers and deployers should meet their new transparency obligations under Article 50 of the EU AI Act. Released for consultation and developed by two expert working groups, the draft serves as a foundation for later versions. It reflects a wide-ranging effort to balance legal obligations, technical feasibility, and public-interest concerns, drawing on more than 180 stakeholder submissions and workshop discussions.

The working groups emphasise that today’s generative systems can produce large volumes of synthetic text, audio, images, and video that closely resemble human-created material, making it harder for the public to distinguish authentic from synthetic content. The draft argues that transparency is essential to maintain trust in information ecosystems and to mitigate risks of deception, impersonation, and manipulation. Because no single technical method is sufficient at present, the draft adopts a multi-layered approach to marking AI outputs, combining metadata, imperceptible watermarks, and, where necessary, fingerprinting or logging systems. Providers of AI models are also expected to implement markings at the model level to support compliance further down the value chain.
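To make the layered approach concrete, the following is a minimal sketch of how a provider might combine an overt machine-readable marking with an imperceptible one, here using PNG text metadata plus a naive least-significant-bit (LSB) watermark. The field names, payload, and LSB scheme are illustrative assumptions, not techniques specified in the draft.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo
import numpy as np

# Bit pattern to embed; the payload and 1-bit-per-pixel scheme are illustrative.
WATERMARK_BITS = np.unpackbits(np.frombuffer(b"AI-GEN", dtype=np.uint8))

def mark_image(img: Image.Image) -> tuple[Image.Image, PngInfo]:
    """Embed a short bit pattern in the least significant bits of the red
    channel (imperceptible layer) and prepare provenance metadata (overt layer)."""
    pixels = np.array(img.convert("RGB"))
    red = pixels[..., 0].flatten()               # copy of the red channel
    n = WATERMARK_BITS.size
    red[:n] = (red[:n] & 0xFE) | WATERMARK_BITS  # overwrite the lowest bit
    pixels[..., 0] = red.reshape(pixels.shape[:2])

    meta = PngInfo()                             # machine-readable marking
    meta.add_text("ai_generated", "true")        # hypothetical field names
    meta.add_text("generator", "example-model-v1")
    return Image.fromarray(pixels), meta

marked, meta = mark_image(Image.new("RGB", (64, 64), "white"))
marked.save("marked.png", pnginfo=meta)
```

In practice, providers would more likely rely on standardised provenance metadata such as C2PA manifests and on watermarks engineered to survive compression and editing, which a toy LSB scheme does not.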

To complement these markings, providers must also offer detection tools such as interfaces or APIs that allow users, platforms, and third parties to verify whether specific content was generated by their systems. The draft also encourages collaboration on shared forensic detectors that work even when active markings are absent, and expects providers to give accessible explanations of detection results and support public literacy around AI provenance.
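A verification interface of the kind described might, at its simplest, read such a marking back and return an explanation-friendly result. The sketch below checks for the illustrative LSB pattern from the marking example above; the response fields and decision threshold are assumptions, not a specified API.

```python
import numpy as np
from PIL import Image

def detect_watermark(path: str) -> dict:
    """Report whether the illustrative LSB pattern is present, with a score
    a provider could expose through a detection interface."""
    expected = np.unpackbits(np.frombuffer(b"AI-GEN", dtype=np.uint8))
    red = np.array(Image.open(path).convert("RGB"))[..., 0].flatten()
    score = float(((red[: expected.size] & 1) == expected).mean())
    return {
        "ai_generated": score > 0.95,  # decision threshold is an assumption
        "confidence": score,
        "method": "lsb-watermark",     # hypothetical method label
    }

print(detect_watermark("marked.png"))  # confidence 1.0 for the file above
```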

Alongside technical measures, the draft requires providers to establish compliance frameworks, conduct testing and monitoring aligned with state-of-the-art benchmarks, document robustness against adversarial manipulation, train relevant staff, and cooperate with market surveillance authorities.
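As an illustration of such robustness testing, the sketch below (reusing detect_watermark from the previous example) re-encodes a marked image at several JPEG quality levels and records how detection confidence degrades. The transformation set is an assumption; real test suites would also cover cropping, rescaling, re-recording, and deliberate removal attempts.

```python
from PIL import Image

def robustness_report(path: str) -> dict:
    """Re-encode the marked image at several JPEG qualities and record how
    watermark detection confidence holds up under each transformation."""
    results = {}
    for quality in (95, 75, 50):
        Image.open(path).convert("RGB").save("reencoded.jpg", quality=quality)
        results[f"jpeg_q{quality}"] = detect_watermark("reencoded.jpg")["confidence"]
    return results

# Naive LSB marks typically fail this test, which is exactly the kind of
# weakness a documented robustness assessment is meant to surface.
print(robustness_report("marked.png"))
```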

A second section focuses on obligations for deployers – those who use AI systems to publish or disseminate content. Deployers must label deepfakes and AI-generated or manipulated text intended to inform the public on matters of public interest. A common taxonomy distinguishes fully AI-generated from AI-assisted content, and a common icon is proposed to ensure consistent disclosure across formats. The draft anticipates development of an EU-wide interactive icon that can also provide additional information on what elements of a piece of content were AI-generated or altered.
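Internally, a deployer might encode this taxonomy as a small data structure so that labelling decisions are applied consistently across formats. The category names below follow the distinction the draft describes; the field names and icon identifier are placeholders, since the common icon has not yet been designed.

```python
from dataclasses import dataclass
from enum import Enum

class AIContentCategory(Enum):
    """The two categories the draft's common taxonomy distinguishes."""
    FULLY_AI_GENERATED = "fully-ai-generated"
    AI_ASSISTED = "ai-assisted"

@dataclass
class DisclosureLabel:
    category: AIContentCategory
    icon: str = "eu-ai-transparency"        # placeholder; no icon exists yet
    altered_elements: tuple[str, ...] = ()  # e.g. which parts were AI-altered

label = DisclosureLabel(AIContentCategory.AI_ASSISTED, altered_elements=("voice",))
print(f"{label.icon}: {label.category.value}, altered: {label.altered_elements}")
```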

The draft provides detailed guidance for different media. Video, audio, images, and multimodal content each require context-appropriate placement of labels or disclaimers. For audio-only content, spoken disclosures are required at first exposure. Artistic, creative, satirical and fictional works receive more flexible treatment: disclosures must be visible or audible but non-intrusive, ensuring they do not interfere with the integrity or enjoyment of the work while still alerting users to the presence of AI-generated or manipulated elements.
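A deployer’s publishing pipeline could capture these media-specific rules in a simple dispatch, as in the sketch below. The placement strings paraphrase the draft’s guidance in simplified form and are not its exact requirements.

```python
def disclosure_for(modality: str, artistic: bool = False) -> str:
    """Return a context-appropriate disclosure placement for a given modality."""
    if artistic:
        # Artistic, creative, satirical and fictional works: flexible treatment.
        return "non-intrusive visible or audible notice"
    placements = {
        "video": "on-screen label placed in context",
        "image": "visible label adjacent to the image",
        "audio": "spoken disclosure at first exposure",
        "text": "disclaimer accompanying the published text",
    }
    return placements.get(modality, "visible label")

for modality in ("video", "audio"):
    print(modality, "->", disclosure_for(modality))
```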

Deployers must also maintain internal procedures, ensure human oversight in classification decisions, train staff, monitor for mislabelling, and cooperate with authorities and third parties such as media regulators and fact-checking organisations.

The draft acknowledges that important questions remain unresolved, including marking for short text, emerging modalities such as agentic AI and VR environments, audio-only labelling, common icon design, and technical specifications for interactive disclosures. Stakeholders are invited to submit further input by 23 January 2026. The next version will integrate this feedback and reflect evolving technical and regulatory developments.
