New guide outlines how civil society organisations can approach generative AI responsibly

A newly published Generative AI Guide for Civil Society offers a practical framework to help civil society organisations navigate the risks and responsibilities associated with generative artificial intelligence, without encouraging its adoption.

A new guide aimed at civil society organisations (CSOs) sets out how they can approach generative AI in a way that aligns with human rights, accountability, and organisational values. Published in December 2025 by the Digital Justice Network, Parti Co-op, and the Institute for Digital Rights, the Generative AI Guide for Civil Society responds to growing concern among activists about the unregulated, ad hoc use of tools such as text, image, and video generators.

Rather than promoting the use of generative AI, the guide explicitly positions itself as a resource for risk management and governance. It notes that many CSOs lack organisational policies on generative AI, warning that unchecked use can undermine credibility, expose sensitive data, reproduce social biases, and weaken internal deliberation and capacity development.

The document is structured around three core elements. First, it introduces key technical concepts, such as large language models, hallucinations, and training data, to support informed decision-making by non-technical audiences. Second, it examines the broader social implications of generative AI, including labour exploitation in data labelling, environmental costs linked to data centres, copyright disputes over training data, and risks to the public information ecosystem through misinformation and deepfakes.

Third, and most centrally, the guide proposes a modular policy framework that organisations can adapt to their own context. This framework is built around principles of human responsibility, transparency, data protection, non-discrimination, and environmental awareness. It includes practical guidance on when generative AI use should be restricted, how outputs should be reviewed and disclosed, and how organisations can respond to incidents or misuse.

The authors stress that there is no single ‘correct’ generative AI policy for civil society. Instead, they frame policy development as a participatory process that should reflect each organisation’s mission, risk tolerance, and ethical commitments. In doing so, the guide positions civil society not only as a user of AI tools, but also as an actor with a role in shaping norms and expectations around responsible AI development and deployment.

While acknowledging the rapid spread of generative AI in everyday digital tools, the guide concludes that restraint, transparency, and collective governance are essential if civil society organisations choose to engage with these technologies at all.
