European Commission issues guidelines on obligations for general-purpose AI models ahead of AI Act implementation
The guidelines represent the Commission’s interpretation of the AI Act and will be used to guide enforcement. While not legally binding, they provide a practical framework for identifying GPAI models, determining provider responsibilities, and understanding exemptions.

On 18 July 2025, the European Commission published guidelines clarifying the obligations of providers of general-purpose AI (GPAI) models under the AI Act (Regulation (EU) 2024/1689). These guidelines are part of a broader package supporting the entry into application of Chapter V of the AI Act on 2 August 2025, which introduces EU-wide rules for GPAI model providers. They complement the General-Purpose AI Code of Practice, submitted to the Commission by independent experts on 10 July 2025.
The guidelines define GPAI models as those capable of performing a wide range of distinct tasks and of being integrated into a variety of downstream systems. A model is presumed to fall under this category if it is trained with more than 10²³ floating-point operations (FLOP) and can generate text, audio, image, or video outputs. Models that exceed 10²⁵ FLOP in training compute, or that meet other criteria set out in the AI Act, may be classified as presenting systemic risk, which triggers additional obligations.
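The two compute thresholds amount to a simple decision rule, sketched below in Python. This is an illustration, not legal advice: the function and constant names are invented for this example; only the 10²³ and 10²⁵ FLOP figures and the output-modality condition come from the guidelines.

```python
# Illustrative sketch of the guidelines' presumptive compute thresholds.
# The names here are hypothetical; the 1e23 and 1e25 FLOP figures are
# the presumption thresholds described in the guidelines.

GPAI_THRESHOLD_FLOP = 1e23           # presumption: general-purpose AI model
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption: systemic risk

def classify_model(training_flop: float, generates_media: bool) -> str:
    """Roughly classify a model by training compute and output modality."""
    if training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "GPAI model presumed to present systemic risk"
    if training_flop > GPAI_THRESHOLD_FLOP and generates_media:
        return "GPAI model"
    return "outside the presumption (case-by-case assessment)"

print(classify_model(3e24, generates_media=True))  # -> GPAI model
print(classify_model(2e25, generates_media=True))  # -> systemic risk presumed
```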
Providers of GPAI models must comply with documentation and transparency requirements, ensure compliance with EU copyright law, and publish a summary of the content used to train their models. If a model poses systemic risk, further measures apply, including continuous risk assessment and mitigation, reporting of serious incidents, and maintaining cybersecurity protections throughout the model's lifecycle.
The guidelines also clarify exemptions for open-source GPAI models. Providers can be exempt from some obligations if their models are released under a free and open-source licence without monetisation, and if the model's architecture, parameters, and usage information are made publicly available. However, models classified as presenting systemic risk are not eligible for these exemptions.
Developers who significantly modify GPAI models may themselves be considered providers, particularly if the modification uses more than one-third of the original model’s training compute. In such cases, they are subject to the same regulatory requirements, including potential obligations related to systemic risk.
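Expressed as arithmetic, the one-third test compares the compute spent on the modification with the original model's training compute. The sketch below is hypothetical: the function name and the example figures are invented, while the one-third ratio comes from the guidelines.

```python
# Hypothetical sketch of the one-third-of-training-compute test for
# downstream modifiers; the 1/3 ratio is from the guidelines, the rest
# is invented for illustration.

def modifier_becomes_provider(original_training_flop: float,
                              modification_flop: float) -> bool:
    """True if the modification uses more than one third of the original
    model's training compute, making the modifier a provider."""
    return modification_flop > original_training_flop / 3

# Example: fine-tuning with 4e23 FLOP a model originally trained with 1e24 FLOP
print(modifier_becomes_provider(1e24, 4e23))  # True -> provider obligations apply
```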
While the guidelines are not legally binding, they reflect the Commission’s interpretation of the AI Act and form the basis for enforcement by the AI Office. Providers are encouraged to adhere to the Code of Practice to simplify compliance and reduce administrative burdens. The Commission may revise the guidelines in the future to reflect developments in technology and practice.