UNICEF calls for criminalisation of AI-generated child sexual abuse material
UNICEF has urged governments to criminalise the creation and distribution of AI-generated sexualised images of children, citing new evidence that at least 1.2 million children across 11 countries reported having their images manipulated into sexually explicit deepfakes in the past year.
UNICEF has called on governments worldwide to expand criminal laws to explicitly cover artificial intelligence-generated child sexual abuse material. The appeal follows what the agency describes as a rapid increase in the circulation of AI-generated sexualised images involving children.
In a statement issued on 4 February 2026, UNICEF said it is ‘increasingly alarmed’ by the rise of so-called deepfakes: images, videos, or audio generated or altered using AI to appear real. In some cases, ordinary photographs of children have reportedly been manipulated using AI tools to create fabricated nude or sexually explicit images, a practice referred to as ‘nudification’.
According to a joint study conducted by UNICEF, ECPAT, and INTERPOL across 11 countries, at least 1.2 million children disclosed that their images had been manipulated into sexually explicit deepfakes in the past year. In some countries included in the study, this represented approximately one in 25 children. The agency also reported that in certain locations, up to two-thirds of children expressed concern that AI could be used to create fake sexual images or videos of them.
UNICEF stated that sexualised images of children created or altered using AI tools should be considered child sexual abuse material. The agency emphasised that the harm caused by such content is not diminished by the use of AI, noting that when a real child’s image or identity is used, that child is directly harmed. It also warned that even fully synthetic materials can contribute to the normalisation of exploitation and complicate law enforcement efforts.
The agency called on governments to revise legal definitions of child sexual abuse material to include AI-generated content and to criminalise its creation, possession, and distribution. It also urged AI developers to adopt safety-by-design measures and technical safeguards to prevent misuse of their systems. In addition, UNICEF called on digital platforms to prevent the circulation of such content and to invest in detection technologies that enable its immediate removal.
