European Commission publishes draft guidance on AI Act transparency obligations
The European Commission has published draft guidelines clarifying how transparency obligations under Article 50 of the EU Artificial Intelligence Act should be implemented, covering AI-generated content and user interactions with AI systems.
The guidance focuses on situations where users interact with AI systems or encounter AI-generated content. It is intended to help providers, deployers, and national authorities apply the rules more consistently.
Article 50 forms part of the AI Act’s transparency framework. It requires certain AI systems to disclose when content has been generated or manipulated by AI, or when users are interacting directly with AI tools rather than humans.
The draft guidance was developed alongside a separate Code of Practice covering labelling and marking of AI-generated content. According to the Commission, the guidelines are intended to address practical and legal questions that fall outside the scope of the code itself.
The issue is becoming more pressing as generative AI tools are integrated into search engines, customer service, content platforms, and communication systems. Regulators are increasingly focused on ensuring users can distinguish synthetic content from human-generated material.
The consultation on the draft text will remain open until 3 June. Feedback from stakeholders will be used to prepare the final version of the guidelines.
One of the central challenges is consistency of enforcement. Because the AI Act will be implemented across all EU member states, the Commission is seeking to narrow differences in interpretation before the transparency obligations start to apply in practice.
