APC and Derechos Digitales warn that AI governance must protect cultural rights and diversity

A joint submission by the Association for Progressive Communications (APC) and Derechos Digitales urges UN human rights experts to ensure that artificial intelligence policies uphold cultural rights, safeguard marginalised communities and prevent the reinforcement of historical inequalities.

The Association for Progressive Communications (APC) and Derechos Digitales have issued a joint submission highlighting the risks artificial intelligence poses to cultural rights and to communities whose identities, languages and histories are already underrepresented in digital spaces. The document responds to a call for input from the UN Special Rapporteur in the field of cultural rights and outlines how AI systems can undermine or, if properly governed, strengthen cultural participation and expression.

The organisations argue that AI development is overwhelmingly concentrated in a small number of geopolitical and corporate centres, which shapes whose knowledge, languages and cultural expressions are preserved or amplified. They warn that datasets used to train AI systems often reproduce historical biases, exclude cultural minorities and perpetuate unequal visibility. These dynamics, they suggest, risk reinforcing cultural hierarchies rather than promoting inclusion.

The submission raises concerns about the widespread extraction of data from communities without their consent or benefit. APC and Derechos Digitales emphasise that cultural data, including linguistic materials, artworks, traditions and community knowledge, require safeguards to prevent misappropriation or distortion. The groups note that AI-generated outputs can misrepresent cultural symbols or traditions, leading to loss of meaning, disrespect or erasure.

APC and Derechos Digitales also highlight risks associated with automated moderation and content curation systems. They report that marginalised communities often experience disproportionate takedowns of cultural content, inaccurate flagging of political or artistic expression, and reduced visibility due to opaque platform algorithms. These effects can limit the right to take part in cultural life, particularly for Indigenous peoples, Afro-descendant communities, LGBTQ+ individuals and speakers of minority languages.

The submission calls for rights-based regulation and meaningful participation of affected communities in shaping AI governance. Recommendations include safeguarding collective data rights, strengthening transparency obligations for companies, and ensuring that public institutions using AI systems prioritise equality, non-discrimination and cultural diversity. The groups also urge states and companies to support open, community-led digital infrastructures that enable preservation and revitalisation of languages and cultural expressions.

The submission concludes that AI governance debates must go beyond technical risk management and address the political and cultural implications of AI deployment. Without structural changes, the organisations warn, AI systems may continue to reproduce global power imbalances and undermine cultural rights instead of supporting them.