UK calls on platforms to strengthen protections against online violence targeting women and girls
Online service providers are being called on to take stronger action against digital harms, including AI-generated abuse, as part of efforts to reduce violence against women and girls.
The call comes in a letter from Liz Kendall, Secretary of State for Science, Innovation and Technology, urging platforms to strengthen protections against abuse targeting women and girls.
The letter outlines expectations for platforms under the Online Safety Act and sets out measures to address online harms, including those enabled by AI.
It confirms that sharing or threatening to share sexually explicit deepfake images without consent is a criminal offence, and that the non-consensual creation of such content has also been criminalised. These offences are being designated as priority offences under the Act, requiring platforms to take proactive steps to prevent and remove such material.
Further measures include proposed restrictions on so-called “nudification” tools, which can generate fake intimate images, and the extension of illegal content obligations to AI chatbots.
Platforms will also be required to remove non-consensual intimate images within 48 hours, a measure intended to spare victims the burden of reporting the same material repeatedly.
The letter refers to guidance issued by Ofcom, which identifies risks such as harassment, stalking, coordinated abuse, and image-based violence. Recommended actions include conducting risk assessments focused on harms to women and girls, strengthening privacy settings, limiting the visibility of harmful content, and introducing controls to prevent coordinated attacks.
Platforms are expected to implement these measures by the end of 2026, and their progress will be monitored.
