European Commission launches public consultation on AI transparency guidelines and Code of Practice

The European Commission has launched a targeted public consultation to support the development of guidelines and a Code of Practice on the transparency obligations for AI systems set out in Article 50 of the AI Act. This initiative, coordinated by the AI Office within DG CNECT, comes ahead of the mandatory application of Article 50 from 2 August 2026, and aims to gather input from a wide array of stakeholders on how best to implement transparency requirements for interactive and generative AI systems, biometric categorisation, emotion recognition, and deepfakes.


The consultation responds to Article 96(1)(d) and Article 50(7) of the AI Act, which task the Commission with preparing practical guidelines and promoting the development of a Code of Practice for transparency. These efforts are essential to ensure people are informed when interacting with AI systems or exposed to their outputs, especially when such systems can impersonate humans, generate synthetic content, or profile individuals based on emotions or biometric data.

The AI Act, which entered into force on 1 August 2024, introduces a harmonised EU framework for trustworthy AI, ensuring both innovation and fundamental rights protection. Transparency is a key principle in this framework, particularly when AI is used in sensitive contexts or produces content that may appear human-made.


Article 50 AI Act: transparency obligations

The consultation focuses on implementing the six key paragraphs of Article 50:

  1. Article 50(1) – AI systems interacting directly with people must inform users they are interacting with AI unless this is obvious. The design should take into account vulnerable groups such as children or persons with disabilities.
  2. Article 50(2) – Providers of generative AI systems must mark synthetic content in a machine-readable format to allow detection and origin tracing, using techniques like metadata, watermarks, cryptography, or fingerprints.
  3. Article 50(3) – Users of biometric categorisation or emotion recognition systems must inform individuals exposed to such systems, unless exempted for law enforcement purposes under certain safeguards.
  4. Article 50(4) – AI-generated deepfake content or AI-generated text intended to inform the public must disclose its artificial nature. Exceptions apply to artistic or satirical works and to text that has undergone human editorial review.
  5. Article 50(5) – Information under paragraphs 1–4 must be provided clearly and accessibly, at the latest during the first interaction or exposure, and in formats accessible to people with disabilities.
  6. Article 50(6) – These transparency rules are without prejudice to high-risk system requirements or other EU/national transparency laws.
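Article 50(2) leaves the choice of marking technique open (metadata, watermarks, cryptography, or fingerprints). As a purely illustrative sketch of one such approach, a provider might attach a machine-readable provenance record to generated content and bind it to the content with a cryptographic fingerprint; the field names below ("ai_generated", "generator", "sha256") are hypothetical and not mandated by the AI Act or any standard:

```python
import hashlib
import json

def mark_synthetic(content: bytes, generator: str) -> dict:
    """Build a machine-readable provenance record for AI-generated content.

    Illustrative only: the schema is an assumption, not a prescribed format.
    """
    return {
        "ai_generated": True,  # explicit disclosure flag
        "generator": generator,  # identifies the producing system
        # SHA-256 fingerprint ties the record to this exact content
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_mark(content: bytes, record: dict) -> bool:
    """Check that a provenance record matches the content it claims to mark."""
    return (
        record.get("ai_generated") is True
        and record.get("sha256") == hashlib.sha256(content).hexdigest()
    )

record = mark_synthetic(b"synthetic image bytes", generator="example-model-v1")
print(json.dumps(record, indent=2))
print(verify_mark(b"synthetic image bytes", record))  # True: content unchanged
print(verify_mark(b"tampered bytes", record))         # False: fingerprint mismatch
```

Real deployments would more likely rely on emerging provenance standards (e.g., signed content-credential manifests) or robust watermarking rather than a detachable sidecar record like this one.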

Consultation structure and timeline

The consultation is structured into five thematic sections, each aligned with a different paragraph of Article 50. It includes questions about:

  • Scope and definitions (e.g., what qualifies as “interactive” or “deepfake”)
  • Law enforcement exceptions
  • Technical solutions for marking synthetic content
  • Appropriate design for vulnerable groups
  • Editorial control and freedom of expression
  • Interaction with existing laws like the GDPR or DSA

The Commission is seeking concrete, specific and concise feedback, including real-world use cases and practical examples. The consultation is open until 2 October 2025, and all contributions may be published in anonymised or non-anonymised form depending on the respondent’s preferences.


Who should respond?

The consultation is open to a broad spectrum of stakeholders, including:

  • AI providers and deployers
  • Civil society and consumer organisations
  • Media and cultural actors
  • Academics and technical experts
  • Government bodies and regulators
  • Private companies across all sectors (e.g., healthcare, law enforcement, finance)
  • General public and persons exposed to AI systems

Outcome

The responses will shape the upcoming EU Code of Practice and guidelines on AI transparency, aiming to support implementation in a way that is technically feasible, rights-respecting, and practically useful. A summary of the aggregated results will be published by the AI Office, ensuring transparency in the rulemaking process.
