Artificial intelligence (AI)
Why AI matters
AI is reshaping economies, societies, and governance systems. Its applications span diverse areas such as healthcare, education, transportation, and environmental sustainability, offering tremendous potential to solve complex global challenges. At the same time, the rapid integration of AI raises important concerns, including algorithmic bias, risks to human rights, and the deepening of social and economic inequalities.
Civil society plays a crucial role in ensuring that AI development and deployment align with public-interest values, including inclusivity, fairness, accountability, and the safeguarding of fundamental rights.
Key issues in AI governance
Bias and discrimination
AI systems can inadvertently reproduce and amplify societal biases present in their training data. For example, facial recognition technologies have demonstrated disparities in accuracy across different racial groups, raising concerns about equity and fairness in their use.
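As a minimal illustration of the kind of disparity described above, the sketch below computes accuracy separately for each demographic group in a small evaluation set. The group labels, ground-truth values, and predictions are hypothetical stand-ins for a real audit dataset, not results from any actual system.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group label, true match, predicted match).
# In a real audit these would come from a labelled benchmark dataset.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

totals = defaultdict(int)
correct = defaultdict(int)

for group, truth, prediction in records:
    totals[group] += 1
    correct[group] += int(truth == prediction)

# Per-group accuracy; large gaps between groups signal potential bias.
for group in sorted(totals):
    accuracy = correct[group] / totals[group]
    print(f"{group}: accuracy = {accuracy:.2f} ({totals[group]} samples)")
```

Disaggregating performance metrics by group in this way is one of the simplest checks civil society auditors and researchers can apply when evaluating claims of fairness.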
Transparency and accountability
Many AI systems function as 'black boxes', with decision-making processes that cannot easily be inspected or explained. This opacity complicates efforts to understand, challenge, and remedy errors or harms caused by AI applications.
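One common way practitioners probe an opaque model is to measure how much each input feature influences its predictions. The sketch below uses permutation importance from scikit-learn on a synthetic classifier; the dataset and model are illustrative assumptions only, not a prescribed auditing method or one referenced in this text.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision-making dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much accuracy drops:
# larger drops indicate features the 'black box' relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```

Techniques of this kind offer only partial transparency: they indicate which inputs matter, but not whether the underlying decision logic is fair or lawful, which is why accountability mechanisms remain essential.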
Impact on human rights
AI poses risks to several fundamental rights, including privacy, freedom of expression, and non-discrimination. Automated decision-making in areas such as social services or law enforcement may disproportionately affect vulnerable groups, while surveillance technologies powered by AI can threaten individual privacy.
Economic inequality
While AI has the potential to drive innovation and economic growth, it may also exacerbate inequalities. Automation could disproportionately displace low-wage workers, while the benefits of AI are often concentrated in high-income regions and industries.
Forums shaping AI governance
Several global and regional initiatives are influencing AI governance frameworks. For instance, the Organisation for Economic Co-operation and Development (OECD) AI Principles encourage trustworthy AI by promoting transparency, accountability, and human rights. UNESCO’s Recommendation on the Ethics of AI focuses on ensuring AI aligns with ethical and human rights principles.
The EU’s AI Act takes a risk-based approach, imposing stricter requirements on high-risk AI systems while seeking to foster innovation. The Partnership on AI, a multistakeholder initiative bringing together industry, academia, and civil society groups, works to address AI’s societal impact.