Artificial intelligence (AI)

AI is reshaping economies, societies, and governance systems. Its applications span diverse areas such as healthcare, education, transportation, and environmental sustainability, offering tremendous potential to solve complex global challenges. At the same time, the rapid integration of AI raises important concerns, including algorithmic bias, risks to human rights, and the deepening of social and economic inequalities.

Civil society plays a crucial role in ensuring AI development and deployment align with public interest values, including inclusivity, fairness, accountability, and the safeguarding of fundamental rights.

Bias and discrimination

AI systems can inadvertently reproduce and amplify societal biases present in their training data. For example, facial recognition technologies have demonstrated disparities in accuracy across different racial groups, raising concerns about equity and fairness in their use.
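One concrete way such disparities come to light is a group-disaggregated accuracy audit: computing a model's accuracy separately for each demographic group rather than in aggregate. The sketch below illustrates the idea with purely hypothetical data and an invented helper (`accuracy_by_group`); it is not drawn from any specific facial recognition study.

```python
# Minimal sketch of a bias audit: per-group accuracy over model predictions.
# All records below are hypothetical, invented purely for illustration.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions: 9/10 correct for group A, 6/10 for group B.
# An aggregate accuracy of 75% would hide this 30-point gap.
records = (
    [("A", 1, 1)] * 9 + [("A", 1, 0)] * 1 +
    [("B", 1, 1)] * 6 + [("B", 1, 0)] * 4
)
print(accuracy_by_group(records))  # {'A': 0.9, 'B': 0.6}
```

The point of disaggregating is that a single headline accuracy figure can mask large gaps between groups, which is exactly the equity concern raised above.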

Transparency and accountability

Many AI systems function as black boxes with opaque decision-making processes. This lack of transparency can complicate efforts to understand and address errors or harms caused by AI applications.

Impact on human rights

AI poses risks to several fundamental rights, including privacy, freedom of expression, and non-discrimination. Automated decision-making in areas such as social services or law enforcement may disproportionately affect vulnerable groups, while surveillance technologies powered by AI can threaten individual privacy.

Economic inequality

While AI has the potential to drive innovation and economic growth, it may also exacerbate inequalities. Automation could disproportionately displace low-wage workers, while the benefits of AI are often concentrated in high-income regions and industries.


Several global and regional initiatives are influencing AI governance frameworks. For instance, the Organisation for Economic Co-operation and Development (OECD) AI Principles encourage trustworthy AI by promoting transparency, accountability, and human rights. UNESCO’s Recommendation on the Ethics of AI focuses on ensuring AI aligns with ethical and human rights principles.

The EU’s AI Act aims to regulate high-risk AI systems while fostering innovation, and the Partnership on AI is a multistakeholder initiative involving industry, academia, and civil society groups to address AI’s societal impact.

By engaging in AI governance, civil society can help ensure that AI technologies are developed and deployed in ways that empower individuals, uphold rights, and contribute to equitable and sustainable development. Key avenues for engagement include:

  • Promote inclusivity: Support efforts to diversify the voices shaping AI development and governance, ensuring that underrepresented groups, including those from the Global South, are included in decision-making processes.
  • Encourage ethical standards: Engage in the development of ethical AI guidelines and principles, ensuring they align with public interest values and address potential risks to human rights.
  • Strengthen capacity-building: Provide training and resources for communities, organisations, and policymakers to understand and navigate AI technologies and governance structures.
  • Monitor and report impacts: Conduct research and advocacy to highlight the societal and human rights impacts of AI, identifying gaps in existing governance frameworks.
  • Collaborate across sectors: Build alliances with academic institutions, technical communities, and private sector actors to co-develop solutions that address AI’s challenges while maximising its benefits.