Council of Europe examines algorithmic discrimination at 15 January event
At an online event held on 15 January, the Council of Europe examined how algorithmic bias can reinforce social inequalities and presented new guidance on strengthening legal and policy safeguards for artificial intelligence and automated decision-making systems.
The event brought together policymakers, experts, and human rights actors to assess how existing rules address algorithmic bias, where further action is needed, and how governance frameworks for AI and ADM systems can be strengthened.
During the event, two new publications were presented. One analyses legal protections against algorithmic discrimination, while the other provides policy guidance for national equality bodies and human rights institutions working with AI and ADM systems. The publications aim to clarify how equality and human rights standards apply to algorithmic technologies used by both public authorities and private actors.
Algorithmic discrimination has been shown to deepen existing social inequalities. In employment, AI systems trained on historical data can replicate past biases, potentially favouring male candidates or disadvantaging minority groups. Similar risks arise in public-sector uses of AI, including law enforcement, migration, social welfare, justice, education, and healthcare, where profiling tools, facial recognition, and automated assessments can affect access to rights and services.
Private-sector applications also raise concerns. AI systems used in banking, insurance, and human resources can influence credit decisions, pricing, or recruitment outcomes, with discriminatory effects if the underlying data or system design is flawed.
The discussion placed these issues within the context of emerging regulatory frameworks, including the EU's Artificial Intelligence Act and the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. While these instruments establish important safeguards, the publications presented at the event highlight ongoing gaps in enforcement, oversight, and practical implementation.
National equality bodies and human rights institutions were identified as central actors in addressing algorithmic discrimination. Their roles include monitoring AI and ADM systems, ensuring compliance with legal standards, and promoting rights-based approaches to technology deployment.
The event concluded with practical examples of how European and Council of Europe standards can be applied to public-sector AI initiatives, underlining the need for clearer guidance and stronger institutional capacity to ensure AI systems are fair, accountable, and consistent with fundamental rights.
