The EU Commission opens consultation on high-risk AI systems under the AI Act

On 6 June 2025, the European Commission launched a public consultation on the implementation of the Artificial Intelligence Act’s rules concerning high-risk AI systems. The consultation seeks feedback to support the development of future Commission guidelines, including on how AI systems are classified as high-risk and which obligations apply along the AI value chain.
The AI Act, which entered into force on 1 August 2024, establishes harmonised rules for AI across the European Union. It aims to promote trustworthy and human-centric AI while safeguarding health, safety, fundamental rights, and democratic values. Under the Act, some AI systems are designated as ‘high-risk’ due to their potential to impact safety or fundamental rights. These include AI systems integrated into products regulated by EU safety laws, and systems used in specific areas such as employment, education, law enforcement, or access to essential services.
To ensure proper classification and regulation, the Commission is required to issue guidelines on Article 6 of the AI Act by February 2026. These will include practical examples and explanations of how high-risk AI systems should be identified and governed. Additional guidance is also planned on the requirements and responsibilities placed on both providers and deployers of these systems, as outlined in Chapter III of the Act.
The consultation is open to a wide range of stakeholders, including AI developers, deployers, businesses, public authorities, researchers, and civil society organisations. Participants can contribute by sharing examples and insights that clarify the classification of high-risk AI systems and how compliance obligations should be interpreted. Responses will inform the Commission’s upcoming guidelines and support consistent implementation across the EU.
The consultation runs until 18 July 2025 and is available in English. Contributions may be published, but respondents can request anonymity. A summary of the results will be published based on aggregated data.
The questionnaire is divided into five sections:
- Sections 1 and 2 address the classification of high-risk AI systems under Articles 6(1) and 6(2) and related annexes of the AI Act.
- Section 3 includes broader classification issues, such as intended purpose and overlaps between categories.
- Section 4 concerns requirements and obligations for high-risk AI systems, including value chain responsibilities and definitions such as “substantial modification.”
- Section 5 invites views on whether the current list of high-risk use cases and prohibited practices should be revised.
This consultation supports the ongoing effort to implement the AI Act effectively and transparently, ensuring that stakeholders have a clear understanding of their roles and responsibilities.