EU study calls for overhaul of AI liability rules
The report warns that current laws, including the updated Product Liability Directive, fail to provide clear accountability or adequate compensation for AI-related harm.

The European Parliament has released the study “Artificial Intelligence and Civil Liability: A European Perspective”, offering a comprehensive analysis of how civil liability frameworks should respond to the challenges posed by AI technologies. Written by Prof. Dr Andrea Bertolini, the report critically assesses existing EU legislation, with a focus on the Product Liability Directive (PLD), and concludes that current rules are inadequate for addressing the complexity, opacity, and autonomy of modern AI systems. The study calls on the European Commission to reconsider and strengthen the proposed AI liability legislation, which was slated for withdrawal earlier this year.
The study outlines how EU policy on AI liability has evolved over the past decade. Initially, liability reform was seen as the cornerstone of AI governance, with the European Parliament’s 2017 resolution on robotics advocating strict or risk-based liability, mandatory insurance, and compensation mechanisms. However, the legislative focus has since shifted toward ex ante risk regulation through the Artificial Intelligence Act (AIA), leaving civil liability largely unharmonised at EU level. The planned withdrawal of the proposed AI Liability Directive (AILD) underscores this lack of consensus, raising concerns that divergent national approaches could fragment the internal market.
A major finding of the study is that neither the original PLD nor its 2024 revision (PLDr) offers adequate compensation to victims of AI-related harm. While the PLDr introduced procedural updates such as evidence-disclosure obligations and presumptions of defectiveness, it retains a defect-based structure that, in practice, operates much like fault liability and is ill-suited to high-risk AI systems. The AILD, intended as a complementary instrument, is likewise criticised for its limited impact and legal complexity.
To close these gaps, the study proposes a dedicated strict liability regime for high-risk AI systems (h-AIS). This regime would designate a single operator, defined as the entity that controls and benefits from the AI system, as strictly liable for any harm caused, regardless of fault. Such a model would streamline compensation, reduce litigation costs, and enable effective risk management through insurance and pricing mechanisms.
The study evaluates four policy options: withdrawing the AILD entirely, retaining it unchanged, recasting it as a fault-based rule for h-AIS, or transforming it into a strict liability framework. The report concludes that the last option would offer the most coherent and effective solution, delivering both harmonisation across member states and stronger protection for consumers and businesses.