How to address the global crisis of AI-generated child sexual abuse imagery

AI-generated child sexual abuse material (CSAM) is rapidly becoming one of the most dangerous threats to children online. With the rise of generative artificial intelligence, criminals can now create hyper-realistic abuse images and videos of children who do not exist, or digitally manipulate existing material so that it depicts new abuse. The result is a deeply troubling loophole in the global child protection system, one that current laws and safeguards are not prepared to handle.
What makes this crisis particularly urgent is that no real child needs to be present for harm to be done. AI-generated CSAM often draws on databases of existing images of children, sometimes of past abuse survivors, to render new explicit content. Survivors have spoken out about the trauma of knowing their images are being manipulated and repurposed without their consent. The psychological toll is real, and the legal consequences remain unclear in many jurisdictions.
Europol’s recent global operation, which led to the arrest of 25 individuals involved in creating and sharing AI-generated CSAM, shows how widespread and organized this abuse has become. Investigators found that offenders were using publicly available AI tools to manufacture CSAM that could evade conventional detection systems. Unlike traditional CSAM, which can be flagged by hash-matching technologies that recognize known images, AI-generated abuse content is novel with each iteration, making it nearly impossible to trace with current tools.
Some AI-generated CSAM mimics real children so convincingly that even trained analysts cannot distinguish fabricated from authentic abuse. Predators use these tools both to create and to distribute material that is difficult to police, especially on the dark web and in the encrypted communities where much of this activity thrives.
In the UK, the National Crime Agency has labeled this development a “nightmare scenario” and is pushing for urgent reforms. Current laws are typically built around the protection of real children depicted in abuse imagery, but AI muddies those definitions. If a child does not exist, is the material still illegal? Increasingly, experts argue yes, because the intent and the harm are the same. Some jurisdictions have begun amending their laws to explicitly criminalize AI-generated CSAM, but enforcement remains difficult.
The most dangerous implication of this technology is scale. One offender can now use AI to generate thousands of images in minutes. That content is then circulated globally, further overwhelming already strained detection systems. As the technology becomes more sophisticated and accessible, the problem will grow exponentially – especially as detection lags behind creation.
In countries like Kenya, where internet access is rising rapidly, awareness of AI-generated threats is low. A recent report revealed a growing gap in online child protection. While digital literacy campaigns have helped raise general awareness, many parents, teachers, and policymakers remain unaware of the threats posed by synthetic content. AI-generated CSAM adds complexity to an already under-resourced system and calls for an urgent review of Kenya’s digital child safety policies.
Legal reform is only one part of the solution. Technology companies must also step up by developing tools that can detect AI-generated content before it spreads.
AI must be governed by a clear moral boundary. Children – real or synthetically depicted – must be off-limits. Technology must never become a weapon of exploitation.
— The writer is a Machine Learning Researcher, Technology Policy Analyst and Columnist