European Data Protection Board releases practical guide on AI and data protection
The guide outlines practical steps for developing, deploying, and auditing AI in line with EU data protection and cybersecurity standards. Beyond its technical focus, the curriculum provides valuable tools for civil society to assess risks, ensure accountability, and engage in informed dialogue on responsible AI governance.

On 5 June 2025, the European Data Protection Board (EDPB) published a training guide, "Fundamentals of Secure AI Systems with Personal Data", designed to support professionals in the AI and cybersecurity sectors. Authored by Dr Enrico Glerean and developed under the Support Pool of Experts (SPE) Programme with contributions from the Greek Data Protection Authority, the guide offers a structured and practical curriculum for those working with AI systems that process personal data.
This curriculum provides a comprehensive foundation for technical and multidisciplinary teams tasked with ensuring that AI systems are designed and deployed in compliance with EU data protection law, particularly the General Data Protection Regulation (GDPR), as well as the EU AI Act, whose obligations are being phased in. Organised into five modules, the training covers all phases of the AI lifecycle, from design and data handling to deployment, monitoring, and auditing.
Key contributions
- Grounded definitions: It clearly distinguishes between AI systems and AI models, elaborates on machine learning methods, and aligns with terminology from the EU AI Act and ISO standards.
- Privacy integration: The curriculum explains how AI interacts with GDPR principles, from data minimisation to the rights of data subjects. It outlines lawful bases for processing and the challenges AI poses to these principles.
- Security and lifecycle management: It addresses cybersecurity in depth, highlighting threats specific to AI systems, such as adversarial attacks and data poisoning (a brief illustration of the latter follows this list), and emphasises secure coding and monitoring practices.
- Auditing tools: It presents practical tools and checklists for assessing compliance, including materials developed by European regulators, such as the ICO’s AI Toolkit and the EDPB’s own auditing checklist.
- Open pedagogy: Each chapter includes exercises, case studies, and evaluation criteria to support training and education across sectors.
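To make one of the threats named above more concrete, the minimal sketch below shows how a label-flipping data poisoning attack degrades a toy classifier. It is not taken from the EDPB guide; the synthetic dataset, the logistic regression model, and the 30% poisoning rate are all illustrative assumptions.

```python
# Minimal sketch (not from the EDPB guide): a label-flipping data poisoning
# attack on a toy classifier. Dataset, model, and poisoning rate are
# illustrative assumptions chosen only to show the effect.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for a training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def train_and_score(X_tr, y_tr):
    """Fit a simple model and report accuracy on the clean test set."""
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on clean data.
print("clean training data:", round(train_and_score(X_train, y_train), 3))

# Poisoned: an attacker flips the labels of 30% of the training records.
y_poisoned = y_train.copy()
flip_idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]
print("30% labels poisoned:", round(train_and_score(X_train, y_poisoned), 3))
```

The sketch is only meant to show the kind of training-data integrity risk that the curriculum's security material addresses; real-world attacks and defences are considerably more subtle.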
Relevance for civil society
For civil society organisations (CSOs), this curriculum offers several key benefits. First, it sheds light on how AI systems are developed and which ethical and legal standards should guide their deployment. Second, it equips civil society actors, particularly those working on digital rights, data governance, or accountability, with a shared vocabulary and concrete tools to assess the risks posed by AI systems.
As governments and private actors deploy increasingly complex AI systems in public services, employment, and surveillance, civil society must be able to scrutinise these technologies effectively. This curriculum supports that role by demystifying technical processes and embedding privacy and ethical considerations into every stage of AI development.
Finally, the open licensing of the curriculum allows civil society groups to adapt and reuse the content in educational initiatives, community audits, or policy advocacy efforts, contributing to wider public understanding and engagement with responsible AI governance.