EU AI Act faces complexity test as new report warns of regulatory friction
While the act sets out the world’s first comprehensive legal framework for AI, the report finds that its interaction with existing rules – including the GDPR, Data Act, Digital Services Act and Cyber Resilience Act – may create overlapping compliance obligations and legal uncertainty.
 
The European Parliament has released a detailed study examining how the EU’s Artificial Intelligence Act (AI Act) interacts with other major digital laws. The study raises concerns about overlapping rules, practical enforcement challenges and implications for European competitiveness.
The report, commissioned by the Parliament’s committee on Industry, Research and Energy (ITRE), evaluates whether the EU’s growing body of digital regulation, including the GDPR, Data Act, Digital Services Act, Digital Markets Act, Cyber Resilience Act and NIS2, forms a coherent system or risks creating a fragmented environment for developers and users of artificial intelligence (AI).
A dense regulatory landscape
Adopted in 2024, the AI Act is the first comprehensive legal framework for AI systems. It bans certain AI practices, introduces strict obligations for high-risk systems, and establishes new rules for general-purpose models, including those considered to pose systemic risks. The report acknowledges the act’s emphasis on protecting fundamental rights, ensuring safety and supporting innovation. However, it notes that the legislation does not operate in isolation.
Many obligations under the AI Act overlap with those in other digital laws. For example, public bodies deploying some types of AI will need to conduct fundamental-rights impact assessments, while also meeting existing GDPR requirements for data-protection impact assessments. In other areas, such as cybersecurity, the AI Act intersects with both the Cyber Resilience Act and NIS2, creating multiple layers of similar reporting and risk-management duties.
The study concludes that the simultaneous application of different laws may lead to duplication, conflicting interpretations and additional administrative burden, particularly for smaller companies without extensive compliance teams.
General-purpose AI and governance challenges
One of the newest features of the AI Act – rules for general-purpose AI systems – is highlighted as a significant shift in EU technology governance. These models must comply with transparency and copyright obligations, and those exceeding a certain computational threshold face extra duties related to risk assessment and security.
The report describes the EU’s governance structure as mixed. Alongside national authorities and notified bodies, a new European AI Office will coordinate enforcement and oversight of general-purpose systems. While this centralisation aims to improve consistency, the study notes possible coordination challenges with data-protection authorities and sector-specific regulators.
Global competition considerations
The report places the EU’s regulatory efforts in a wider geopolitical context. The United States and China account for the majority of global investment in AI and dominate model development. According to figures referenced, Europe has attracted a much smaller share of funding and produces fewer flagship models. Industry voices included in the study argue that compliance uncertainty could weaken Europe’s position further, while civil-society groups warn against easing rules at the expense of fundamental rights.
Recommendations
To navigate these tensions, the authors propose a staged approach:
- Short term: clearer joint guidance between regulators, mutual recognition of assessments and harmonised sandbox practices.
- Medium term: targeted legislative adjustments to reduce duplication and clarify roles.
- Long term: potential consolidation of digital regulation and renewed focus on innovation capacity.
The overarching message is that the EU’s regulatory approach seeks to balance rights and competitiveness, but the density and novelty of overlapping frameworks will require continued calibration. Ensuring coherence across the digital legislative landscape may determine whether Europe can advance its AI ambitions without undermining the trust-and-safety goals at the core of its policy model.