The warning delivered in Rome this week was unequivocal: the world is already living through an AI-driven child protection emergency, and the escalation is far outpacing the systems designed to stop it.

At the Child Dignity in the Artificial Intelligence Era conference, Kerry Smith, Chief Executive of the Internet Watch Foundation, described a disturbing shift unfolding since early 2023: artificial intelligence has moved from being a theoretical future threat to what she called “a current and accelerating crisis”, with AI tools now being used by criminals to create and distribute extreme images of child sexual abuse.

In the past year, the IWF recorded a 380% surge in AI-generated child sexual abuse material. From 51 reports in 2023, the caseload exploded to 245 reports containing 7,644 synthetic images in 2024, many of them extreme. The organisation’s analysts now routinely encounter material generated by machines but built on the real suffering of children.

It is a frightening reality that, with the availability of AI and what it is capable of, the world is now dealing with abuse on an unprecedented scale. The IWF first began monitoring AI-generated content in 2023, and it is already clear that the abuse material AI produces is gendered.

“People think we’re dealing with harmless computer fantasies,” Smith said after the gathering. “We are not. We are dealing with technology that is amplifying misogyny, sexual violence, and the exploitation of real children. This is happening right now, at industrial speed.”

Nearly 40% of AI-generated content identified this year falls into Category A, the most serious classification under UK law for depictions of rape, sexual torture or abuse involving animals.

And the victims imagined by machine-learning models overwhelmingly mirror the victims who are most commonly abused offline. In the cases where sex could be determined, 98% of AI-generated victims were girls.

Smith said: “That tells us this technology is reproducing and amplifying existing gendered patterns of abuse – not disrupting them. It’s embedding misogyny and child sexualisation even more deeply into the online ecosystem.

“Using AI child sexual abuse material reinforces harmful sexual fantasies and normalises abusive behaviour rather than preventing it.”

Smith used the Rome event to urge the European Union to strengthen its Recast Child Sexual Abuse Directive, currently under negotiation. The IWF is calling for explicit criminalisation of all forms of AI-generated CSAM — creation, possession and distribution — and a ban on “nudification” apps that digitally strip clothing from images of adults and children.

The conference, organised by Fondazione Child ETS with the Child Dignity Alliance, brought together policymakers, technologists, child-protection experts and religious leaders.
