The European Data Protection Supervisor issues guidance on managing AI risks in EU institutions
The publication sets out a structured methodology based on recognised risk-management standards, linking potential data-protection harms to each stage of the AI lifecycle. It offers practical measures to address fairness, accuracy, security and explainability, and provides technical tools to help controllers ensure compliance with fundamental-rights obligations when deploying or procuring AI.
The European Data Protection Supervisor has released comprehensive guidance on how EU institutions should identify and mitigate data-protection risks when developing, procuring or deploying AI systems. The document outlines a structured methodology for assessing risks to individuals’ rights under Regulation (EU) 2018/1725 (the EUDPR) and provides technical measures to help controllers manage fairness, accuracy, data minimisation, security and data-subject rights in AI environments.
The guidance stresses that AI systems introduce new layers of complexity because of their reliance on large datasets, adaptive behaviour and opaque decision-making mechanisms. To address this, it frames risk management through the ISO 31000:2018 model, detailing phases from risk identification through analysis and evaluation to treatment. It also maps risks onto the full AI lifecycle, from inception and data preparation to deployment, monitoring, continuous validation and retirement. For institutions procuring AI systems, the EDPS stresses that risk evaluation must begin as early as the tendering stage, to avoid locking public bodies into systems that cannot meet regulatory requirements.
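To make the lifecycle mapping concrete, the following is a minimal sketch of how an institution might record identified risks against lifecycle stages in the spirit of the ISO 31000 cycle. The stage names, fields, scoring scale and example entry are illustrative assumptions, not the EDPS's own template.

```python
# Illustrative risk register linking risks to AI lifecycle stages.
# Stages, fields and the 1-5 scoring scale are assumptions for this sketch.
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    INCEPTION = "inception"
    DATA_PREPARATION = "data preparation"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    RETIREMENT = "retirement"

@dataclass
class RiskEntry:
    stage: Stage
    description: str   # risk identification
    likelihood: int    # risk analysis (assumed 1-5 scale)
    impact: int        # risk analysis (assumed 1-5 scale)
    treatment: str     # risk treatment / mitigation measure

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring for prioritisation (risk evaluation).
        return self.likelihood * self.impact

register = [
    RiskEntry(
        stage=Stage.DATA_PREPARATION,
        description="Training data under-represents a demographic group",
        likelihood=3,
        impact=4,
        treatment="Representative re-sampling and a dataset audit before training",
    ),
]

# Review risks in priority order, highest score first.
for entry in sorted(register, key=lambda e: -e.score):
    print(f"[{entry.stage.value}] score={entry.score}: {entry.description}")
```

A register of this kind also supports the procurement point above: entries drafted at the tendering stage can be carried forward and re-evaluated at each later lifecycle stage.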
Much of the document focuses on fairness and accuracy. It highlights how poor-quality or unrepresentative training data, algorithmic design choices, and human interpretative errors can lead to discriminatory or unreliable outputs. The EDPS offers specific technical controls, such as dataset audits, bias-mitigation techniques, representative sampling, feature-engineering safeguards, interpretability tools like LIME or SHAP, and statistical validation methods. It also underscores the importance of explainability as a prerequisite for compliance, noting that controllers must understand how models operate in order to detect misuse, evaluate errors and ensure accountability.
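As an illustration of the interpretability tools the guidance names, the following is a minimal sketch of a post-hoc SHAP analysis. The model, synthetic data and feature names are hypothetical stand-ins; a real audit would use the institution's own model and data.

```python
# Minimal global interpretability check with SHAP on a hypothetical model.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for an institution's dataset and model.
X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The unified SHAP API dispatches to a tree explainer for forest models.
explainer = shap.Explainer(model, X)
explanation = explainer(X)

# Rank features by mean absolute SHAP value: a simple global view of which
# inputs drive the model's outputs, usable as an accountability artefact.
importance = np.abs(explanation.values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```

Outputs like this ranking give controllers a starting point for the error evaluation and misuse detection the EDPS describes, though they do not by themselves establish that a model is fair or accurate.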
The guidance provides EU institutions with a systematic framework for assessing and reducing AI-related risks to fundamental rights. While it does not replace full compliance assessments, it sets out detailed steps, examples and checklists intended to support controllers in navigating the technical and organisational challenges posed by increasingly complex AI systems.
