EU begins work on code for labelling AI-generated content
The European Commission has launched the development of a new code of practice on marking and labelling AI-generated content, as part of its efforts to support the transparency obligations of the EU AI Act.
Under the AI Act, certain AI-generated material, including deepfakes, synthetic audio, video, images and text, must be clearly identified as machine-produced. The rules are intended to make it easier for people to distinguish between genuine and AI-generated content, addressing concerns about misinformation, fraud and impersonation.
The Commission says the upcoming code will help technology providers and users meet these requirements, offering practical guidance on implementing machine-readable labels and disclosure practices.
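What a "machine-readable label" might look like in practice is still to be defined by the code itself. As a purely illustrative sketch, a disclosure could be carried as a small structured record attached to or alongside the content; the field names below are hypothetical and do not come from any published EU specification:

```python
import json

def make_ai_label(generator: str, content_type: str) -> str:
    """Build a minimal machine-readable disclosure label as JSON.

    Illustrative only: the field names are assumptions, not part of
    any EU AI Act standard or the forthcoming code of practice.
    """
    label = {
        "ai_generated": True,          # core disclosure flag
        "generator": generator,        # name of the generating system
        "content_type": content_type,  # e.g. "image", "audio", "text"
        "disclosure": "This content was generated or altered by an AI system.",
    }
    return json.dumps(label, sort_keys=True)

# Example: a sidecar label for a synthetic image
print(make_ai_label("example-model-v1", "image"))
```

In practice, such a label would more likely be embedded in the content's metadata (for instance via provenance standards such as C2PA Content Credentials) rather than shipped as a separate file, which is one of the interoperability questions the code is expected to address.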
Expert-led drafting process
The code will be developed over seven months by independent experts appointed by the European AI Office. The process began with a kick-off plenary meeting and will involve public input and contributions from stakeholders selected through an open call.
While voluntary, the code is expected to serve as a reference for providers of generative and interactive AI systems, particularly those operating in Europe’s regulatory environment.
Preparing for 2026 obligations
Marking obligations for AI-generated content will apply from August 2026, complementing other AI Act transparency and safety requirements for high-risk and general-purpose AI systems.
The Commission notes that the code will also guide organisations that use AI-generated content in public-interest communications, ensuring audiences are informed when content has been created or altered using AI tools.
Wider context
The initiative follows growing public and industry debate on how to verify authenticity in a media environment increasingly populated by synthetic content. Policymakers see consistent, interoperable labelling practices as one way to help build trust while supporting innovation in creative and information sectors.
