EU Parliament moves to ban AI-generated child sexual abuse material as its spread accelerates


The EU Parliament wants to ban AI-generated child sexual abuse material (CSAM) as part of a new directive, citing a rapidly growing threat. The Internet Watch Foundation (IWF) has warned that AI-created abuse content is escalating at an alarming rate.

According to the IWF, the first confirmed case of AI-generated CSAM appeared in 2023. Just one year later, reports have surged by 380%, with 245 incidents in 2024 involving more than 7,600 images and videos.

The IWF says the most severe category of abuse (Category A under UK law) makes up nearly 40% of AI-generated CSAM—almost double the proportion seen in traditional cases. About 98% of this synthetic material depicts girls, a slight increase from the 97% seen across all forms of CSAM.

Offenders are now using tools like text-to-image generators and "nudify" apps, according to the IWF. The most advanced AI systems can even create hyper-realistic short videos.


"What we're seeing now is highly realistic abuse imagery being generated with minimal technical skill. This technology is being exploited to cause real harm to children," said Dan Sexton, the IWF's Chief Technology Officer. The report also highlights that in the most disturbing cases, AI models are being trained on real abuse images.

EU Parliament pushes for strict ban, criticizes Council proposal

EU law currently lacks explicit rules on synthetic abuse material. The EU Parliament has taken a firm stance in the proposed Child Sexual Abuse Directive (CSAD). Lawmakers want to fully criminalize AI-CSAM, including possession for "personal use," and reject any exceptions. They are also calling for clearer definitions, better detection tools, and stronger cross-border cooperation for police and child protection agencies.

The EU Council's current position is under heavy fire. Its draft version of the CSAD would allow people to possess AI-generated abuse images for "personal use," something the IWF calls a "deeply concerning loophole."

The IWF and its partners are pushing to close this gap, arguing there is no such thing as harmless abuse material. A further complication: the flood of AI-generated CSAM can make it harder for investigators to identify real victims and real cases of abuse.

Alongside the push for a full ban, the organization is also demanding an EU-wide prohibition on guides, instructions, and models used to create CSAM, plus better support for survivors. The new directive is still under negotiation.
