Europe finalizes AI Code of Practice to guide compliance with the AI Act
The General-Purpose AI (GPAI) Code of Practice is a voluntary tool, prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act’s obligations for providers of general-purpose AI models.

On 10 July 2025, the European Commission officially received the final version of the voluntary General‑Purpose AI Code of Practice. This landmark document was authored by 13 independent experts, drawing on insights from more than 1,000 stakeholders, including AI developers, SMEs, academics, rights-holders, civil society organisations, and AI safety experts. It arrives in preparation for the AI Act’s general‑purpose AI provisions, which will take effect on 2 August 2025.
Under the AI Act, enforcement begins for new models in August 2026 and for models already on the market in August 2027. The Code aims to help providers of GPAI models, including the models underpinning systems such as ChatGPT, Gemini, and Claude, demonstrate compliance with the Act’s legal requirements for safety, transparency, and copyright handling.
The Code is structured into three thematic chapters: Transparency, Copyright, and Safety & Security. The Transparency chapter mandates detailed documentation, including a Model Documentation Form covering training data, performance, usage scenarios, licensing, and more, alongside clear procedures for responding to information requests from downstream providers and regulators. The Copyright chapter provides guidance on drafting copyright policies, ensuring lawful data use, mitigating the risk of infringing outputs, and setting up complaint and rights-holder liaison mechanisms. The Safety & Security chapter applies only to providers of GPAI models classified as posing systemic risk, establishing requirements such as risk assessment, technical controls, external audits, post-market monitoring, and cooperation with the AI Office.
This final version incorporates several key enhancements compared with earlier drafts. It clarifies timeline obligations, including specific deadlines (e.g., 15-day response windows for information requests), and expands the mechanisms for complaint handling and the transparency of training data summaries. Safeguards for systemic-risk models have been strengthened by mandating external audits and ongoing risk monitoring. The Code also introduces flexibility for SMEs, allowing proportionate compliance aligned with their capacities. Industry groups such as CCIA Europe, as well as large European firms including Airbus, have voiced concerns that the Code remains overly complex and burdensome.
Although adopting the Code is voluntary, signatories benefit from legal certainty and reduced administrative burden compared to alternative compliance routes. Entities that do not sign up must still demonstrate compliance independently, which could involve more extensive regulatory oversight.
In the coming weeks, the AI Office is expected to issue additional guidance clarifying key concepts and the practical application of GPAI rules. The Code itself is now under review and must be formally endorsed by the European Commission and EU member states, with a decision anticipated later in 2025.
The General‑Purpose AI Code of Practice serves as the EU’s first operational blueprint for balancing AI innovation with the protection of fundamental rights under a landmark regulatory framework. As discussions continue, organisations developing and deploying AI should monitor upcoming guidance, evaluate whether their models fall within the Code’s scope, and prepare either to adhere to it or to pursue alternative compliance pathways under the AI Act.