ECNL and the Danish Institute for Human Rights publish guide on Fundamental Rights Impact Assessments under the EU AI Act

ECNL and the Danish Institute for Human Rights have released a new guide on how to conduct Fundamental Rights Impact Assessments for high-risk AI systems. The publication focuses on turning FRIA requirements under the EU AI Act into meaningful governance practices rather than formal compliance exercises.

The European Center for Not-for-Profit Law (ECNL) and the Danish Institute for Human Rights (DIHR) have published A Guide to Fundamental Rights Impact Assessments (FRIA), setting out practical guidance on how organisations can implement the EU AI Act’s new requirements on assessing impacts on fundamental rights.

The EU AI Act introduces Fundamental Rights Impact Assessments as a new obligation for certain deployers of high-risk AI systems. These assessments are intended to ensure that potential impacts on rights protected under the EU Charter of Fundamental Rights are identified and addressed before systems are put into use. The guide argues that the value of a FRIA depends on whether it genuinely informs deployment decisions rather than justifies decisions that have already been made.

Structured around five phases, the guide outlines a process for integrating fundamental rights considerations throughout the AI lifecycle. It emphasises structured internal discussion, documentation of risks, and alignment with international and regional human rights standards, while responding to the practical challenges organisations face during implementation.

The publication is primarily aimed at public authorities and other deployers of high-risk AI systems, but it is also positioned as a resource for any organisation seeking to deploy AI in a rights-respecting manner.

The guide was developed with contributions from experts and civil society organisations and was funded by the European AI & Society Fund.

Why it matters for civil society

Fundamental Rights Impact Assessments will directly influence whether and how high-risk AI systems are deployed, particularly by public authorities in areas such as policing, welfare, migration, and public services. For civil society, FRIAs therefore represent a key accountability mechanism rather than a technical formality.

The guide provides a clear benchmark for what a meaningful FRIA should look like, enabling civil society organisations to assess whether rights risks are being genuinely addressed or reduced to box-ticking. By translating legal obligations under the EU AI Act into concrete processes, it supports more effective scrutiny, advocacy, and engagement with oversight and decision-making around AI deployment.
