Global Digital Justice Forum calls for stronger regulation, transparency and accountability in OHCHR AI–human rights review

Members of the Global Digital Justice Forum have submitted detailed input to the Office of the UN High Commissioner for Human Rights on the use of AI and the UN Guiding Principles on Business and Human Rights. Their submission highlights the human-rights risks of AI deployment across public and private sectors and calls for robust regulation, algorithmic transparency, inclusive participation, public-interest innovation and safeguards against corporate capture.

The Global Digital Justice Forum has submitted a collective response to the OHCHR’s call for inputs for its thematic report on ‘The Use of Artificial Intelligence and the UN Guiding Principles on Business and Human Rights.’ The submission, prepared by leading civil-society organisations from across the Global South, reflects a shared concern that current AI deployment practices by governments and companies risk deepening inequality, infringing fundamental rights and consolidating corporate power over essential public functions. The organisations emphasise that AI systems must be governed under a rights-based framework that protects the public interest and aligns with the UNGPs.

The submission identifies significant human-rights risks emerging from AI procurement and deployment across sectors such as law enforcement, welfare administration, healthcare, education and agriculture. These risks include discriminatory outcomes, opacity in automated decision-making, disproportionate surveillance and exclusionary impacts on marginalised groups. The forum notes that while some regulatory initiatives and promising practices exist, they remain uneven and insufficient. It argues that states must regulate AI not only to mitigate risks but also to advance equitable innovation, ensure public value creation and prevent the public sphere from being shaped by corporate interests.

Central to the submission is a call for stronger accountability frameworks for AI developers and deployers in both the public and private sectors. The organisations argue that governments must establish clear mechanisms for public participation in decisions about AI procurement and use, particularly where systems have significant social or human-rights impacts. They also stress the need for effective remedies, including accessible grievance-redress channels, to address harms caused by AI systems.

Algorithmic transparency is highlighted as an essential enabler of rights-respecting AI governance. The submission recommends expanding the right to information to cover AI-related systems and processes, especially in high-risk scenarios. It also suggests that governments explore the creation of public algorithm repositories to improve oversight and transparency. Additional priorities include capacity-building for public-sector officials so they can conduct risk assessments and due diligence, ensuring labour rights protections throughout the AI value chain, scaling public investment in open compute infrastructures and addressing the environmental impacts associated with large-scale AI deployment.

The submission is endorsed by IT for Change, Derechos Digitales, CLADE, Article 19 Mexico and Central America, Research ICT Africa, the Transnational Institute and DAWN. Together, they reaffirm the need for AI governance frameworks that are grounded in human rights, public accountability and democratic participation, ensuring that AI systems serve collective well-being rather than narrow commercial or political interests.
