UNICEF Warns of Surge in AI-Generated Sexual Abuse Images of Children
UNICEF has issued an urgent warning over the rapid rise of AI-generated sexualised images of children, including manipulated photographs and so-called "deepfakes", describing the trend as a fast-growing global child protection crisis.
UNICEF says artificial intelligence tools are increasingly being used to create, alter and sexualise images of children, including through "nudification" technologies that digitally remove or alter clothing to fabricate nude or sexualised images that appear real.
"Deepfake abuse is abuse," UNICEF said. "There is nothing fake about the harm it causes."
New Evidence Reveals Scale of the Threat
New findings from a joint UNICEF, ECPAT and INTERPOL study across 11 countries confirm the alarming scale of the problem.
Key findings include:
- At least 1.2 million children reported that images of them had been manipulated into sexually explicit deepfakes in the past year
- In some countries, this equates to 1 in every 25 children — roughly one child in a typical classroom
- In several countries, up to two-thirds of children said they fear AI could be used to create fake sexual images or videos of them
UNICEF said levels of concern varied widely between countries, highlighting significant gaps in awareness, prevention and protection measures.
AI-Generated Abuse Is Child Sexual Abuse Material
UNICEF stressed that sexualised images of children created or manipulated using AI tools must be treated as child sexual abuse material (CSAM) under the law.
"When a child's image or identity is used, that child is directly victimised," UNICEF said. "Even where no child is immediately identifiable, AI-generated child sexual abuse material normalises exploitation, fuels demand for abuse, and makes it harder for law enforcement to identify children who need help."
The organisation warned that AI-generated CSAM poses serious challenges for policing and victim identification, while accelerating the spread of abuse at unprecedented speed and scale.
Risks Amplified by Social Media Platforms
UNICEF said the risks are compounded when generative AI tools are embedded directly into social media platforms, allowing manipulated images to circulate rapidly before effective intervention occurs.
While welcoming efforts by some AI developers to implement safety-by-design safeguards, UNICEF warned that protections remain uneven and insufficient across the sector.
"Too many AI models are being developed without adequate guardrails," the organisation said.
UNICEF Calls for Urgent Global Action
UNICEF urged governments, technology companies and AI developers to act immediately to confront the escalating threat, calling for:
- Governments to expand legal definitions of CSAM to explicitly include AI-generated content, and to criminalise its creation, possession and distribution
- AI developers to implement robust safety-by-design approaches to prevent misuse of their systems
- Digital and social media companies to prevent the circulation of AI-generated CSAM — not merely remove it after harm has occurred — and to invest in real-time detection technologies so abusive content can be taken down immediately
"Children Cannot Wait"
UNICEF warned that delays in regulation and enforcement risk leaving millions of children exposed.
"The harm from deepfake abuse is real and urgent," UNICEF said. "Children cannot wait for the law to catch up."