Artificial intelligence and human rights: a European Parliament study on repression

The report provides a comprehensive picture of how AI is weaponised for repression, the risks it poses to human rights, and the urgent need for stronger European and international safeguards.

In May 2024, the European Parliament published an in-depth analysis titled Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights. Prepared for the Subcommittee on Human Rights (DROI), the study examines how artificial intelligence and algorithmic systems are being deployed by governments to monitor, control, and suppress societies, and what this means for democracy and human rights.

The report begins by clarifying the terminology around ‘digital repression,’ ‘algorithmic repression,’ and ‘algorithmic authoritarianism.’ It highlights that while some definitions focus on authoritarian states, the concept applies more broadly to the use of algorithms for coercion, censorship, and surveillance, regardless of regime type. Academic work shows how AI systems can reinforce existing biases, silence dissent, or induce self-censorship. Policy bodies, including the UN, Council of Europe, and EU, have warned of AI’s capacity to undermine freedom of expression and fundamental rights.

Case studies form a large part of the analysis. In China, AI technologies underpin surveillance in Xinjiang and support the evolving Social Credit System, which combines behavioural and financial data to shape compliance. In Russia, laws introduced in 2016 and tightened after the 2022 invasion of Ukraine have expanded algorithmic monitoring. Iran uses AI to police online spaces and suppress opposition, while Egypt and Ethiopia illustrate similar patterns of state control. The report notes that democracies are not immune: the United States deploys AI in domestic surveillance and exports related technologies abroad, while European companies have supplied tools later used in authoritarian settings.

The study also evaluates existing international frameworks. The EU's AI Act, Ethics Guidelines for Trustworthy AI, and related initiatives represent an attempt to regulate high-risk applications such as biometric surveillance and social scoring. The Council of Europe is drafting a convention on AI, human rights, democracy, and the rule of law, while UNESCO, the OECD, and the Global Partnership on AI provide non-binding principles. However, these frameworks remain fragmented and limited in their capacity to prevent misuse.

The final section sets out policy recommendations for the EU and the European Parliament. These include strengthening export controls on surveillance technologies, improving support for civil society and human rights defenders, and using sanctions where appropriate. The report also calls for regulatory convergence with international partners and vigilance against algorithmic abuses within democracies themselves.

By examining the global spread of algorithmic authoritarianism and assessing the shortcomings of existing safeguards, the study underscores the need for a more coherent international response to ensure that technological progress does not come at the expense of human rights.
