EU MEPs to scrutinise AI security risks and enforcement gaps in EU rules

A European Parliament committee will examine how AI and cybersecurity laws are applied in practice, with a focus on risks from advanced systems.

A discussion in the European Parliament’s Internal Market Committee (IMCO) will focus on a practical question: what happens when advanced AI systems create real security risks, and how well do existing EU rules handle that?

The session brings together policymakers and technical experts to look beyond legislation and into implementation. While the EU Artificial Intelligence Act sets out risk categories and obligations, and the EU Cybersecurity Act provides a framework for certification and security standards, both depend on how companies and regulators interpret them in practice.

That gap between rules and execution is at the centre of the discussion. Lawmakers are expected to explore how developers of advanced AI systems assess risks during design and deployment, and whether current compliance approaches are sufficient when systems become more complex or integrated into critical services.

Input from the European Union Agency for Cybersecurity (ENISA) and the European Commission is expected to focus on how risk is evaluated technically, not just legally. This includes how vulnerabilities are identified, how certification schemes apply, and where current frameworks may not fully capture emerging threats.

The timing is also relevant. Proposed updates to the Cybersecurity Act are already under discussion in Parliament, meaning that conclusions from this exchange could feed into how future rules are shaped, particularly where AI and cybersecurity overlap.