Artificially Generated Child Sexual Abuse Material is not a victimless crime
The often-raised theory that AI-CSAM is harmless, or can even make children safer, must be quashed
“If the tech keeps them away from actual children then it’s a step to a safer world to grow up in.” “If these AI programs stop actual abuse/rapes it’s a victimless crime and shouldn’t be illegal.” “I wonder if this can be used in a harm reduction capacity.”
These are some of the comments on a YouTube video of a BBC interview with me in June 2023. In that interview, I was talking about my research into the horrific subject of AI-generated child sexual abuse material (AI-CSAM). Since then, I have read and heard versions of those comments thousands of times, both on and offline. This isn’t a fringe view; it’s an idea that comes intuitively to many. Indeed, it is so popular that I have reluctantly given it a name: the substitute hypothesis. After initially expressing disgust when hearing about AI-CSAM, subscribers to the substitute hypothesis wonder whether this material can serve as a harmless replacement for real content, and in this way benefit children. But the theory is flawed almost beyond comprehension, and its ubiquity is deeply concerning to me. AI-CSAM poses numerous risks to children, and for the world to take this problem seriously, we all need to understand why that is.
I first encountered harmful content sexualising children on the public gallery of the AI program Midjourney, which led to an investigation with The Times. Unfortunately, Midjourney, a closed-source AI tool, was just the tip of the iceberg. Many more individuals were using variants of Stable Diffusion, a program released by UK company Stability AI, to generate sexually explicit images of children being abused and violently raped by adults. Stable Diffusion’s code is open source, making it publicly viewable, downloadable, and (most significantly) modifiable. The safety guardrails can be easily removed, and the program can be re-trained on new datasets, offline, by anyone. This gives child predators the ability to create extreme and photorealistic AI-CSAM at the click of a button. The decision to release the program openly, in August 2022, was made by then-CEO Emad Mostaque, about whom I wrote in The Critic in November 2023; in March of this year Mostaque resigned amidst internal conflict and financial chaos at the company.
My story with the BBC was the first to expose a thriving trade in AI-CSAM on the surface web. Creators were using Stable Diffusion to generate the content, the Japanese image-sharing site Pixiv to share it, and the San Francisco-based creator subscription platform Patreon to sell it. (I was worried about naming those websites in my interview with the BBC, but was told — and now agree — that the risk of directing predators to content is outweighed by the need to hold platforms accountable.) One of the greatest threats surrounding AI-CSAM originates from the sites that host it. In certain countries, like Japan, AI-CSAM is legal, while CSAM of real children is not (Japan only criminalised possession of CSAM in 2014, after much resistance). As a result, communities where paedophiles gather online — which would previously have been confined to the dark web — now thrive in open internet spaces. As recently as last year, Pixiv comment sections were clearly hubs for paedophiles. A picture of a clothed child on Pixiv — billed as a preview of a paywalled Patreon release — was met with comments including “Girls as young as 4-6 dress like this and society has the audacity to expect us not to rape them?” “Pedos unite,” wrote one user, to which another replied, “we are everywhere.” Expressing a desire to abuse — even to abduct and kill — real children, particularly family members, seemed to be accepted and commonplace among these groups. One commenter on Pixiv asked: “Guys, should I rape and kill my newborn niece?” Ten other users replied in the affirmative. “Make sure her corpse is unrecognisable,” wrote one. AI-CSAM heightens, rather than diminishes, violent paedophilic desires.
In the comment sections accompanying AI-generated images of children, I found that users frequently offered trades of real images of child sexual abuse, sharing contact details for private messaging services like Session — an app targeted at “people who want absolute privacy and freedom from any form of surveillance”. When I spoke to the NSPCC, they identified this behaviour as “breadcrumbing”: the practice of offenders leaving a digital trail on popular websites to lead fellow paedophiles to hidden illegal content. Overall, the NSPCC described the network of these websites as a figurative “town square for offenders”. More recent (and yet to be published) research of mine has found several social media platforms to be serving the same purpose.
Conspicuously absent from these de facto digital forums for paedophiles is any variation of the substitute hypothesis. Instead, users seem thrilled by the development of this new technology, seeing it not as a substitute for real child sexual abuse content, but as an exciting supplement. After all, AI-CSAM carries its own unique qualities: a predator can create or commission an image depicting a child in a specific fetish act, a child of a specific racial or ethnic background, or a child in a specific location. The grooming risks are also terrifying: a predator could generate an image of themself in a sexual situation with a real child, which could be used to target that child or their friends. Images of multiple children can be merged, generating new images of an artificial child that cannot easily be traced to the originals, hindering investigators from tracking abuse material of real children. This was something I witnessed on Midjourney early in my research.
Law enforcement in the UK seems to be taking the matter seriously. In a speech last year, Graeme Biggar, Director General of the National Crime Agency (NCA), listed AI-CSAM as one of the key new criminal threats facing the UK. According to Biggar, the viewing of AI-CSAM “materially increases the risk of offenders moving on to sexually abusing children themselves.” The NCA attributes the rise in the estimated number of adults who pose a sexual risk to children (now one in 50 men, or 1.3 to 1.6 per cent of all adults) partly to the radicalising effect of internet groups in which images of children being raped are widely available, normalised and discussed.
The suggestion that paedophilic desires can develop, rather than being fixed and unchanging, is backed up by research. The Lucy Faithfull Foundation, a child sexual abuse prevention charity, has found that many who view child sexual abuse content online do not have a preexisting sexual preference for children. Rather, they become drawn to CSAM as an outgrowth of an escalating online pornography habit. This possibility heightens the danger posed by AI-CSAM. Because AI images are more readily available than real content, and AI content invariably signposts to real CSAM, the gateway risks are undeniable. Individuals may begin exploring AI images, particularly if they consider them harmless, and subsequently develop a preference for real content — and real children.
If any reader still sympathises with the substitute hypothesis, a curious episode in U.S. history may change their mind.
In the mid-1990s, as with AI today, fears around the internet were proliferating. One concern in particular was absolutely justified, but was about 25 years premature. The Bill which led to the U.S. Child Pornography Prevention Act of 1996 stated that “computer imaging technologies make it possible to produce […] depictions of what appear to be children engaging in sexually explicit conduct that are virtually indistinguishable to the unsuspecting viewer from unretouched photographic images of actual children”. The dangers listed were extensive and accurate, including the general sexualisation of minors, the tailoring of depictions of child sexual abuse to individual preferences, and grooming. Artificial child sexual abuse material was thus outlawed in the U.S.
The reaction to the Bill’s passing was predictable: while many expressed disgust at the idea of computer-generated sexual abuse material of imaginary children, they also asked: who was the victim here? And if there was no victim, how was it constitutional to ban it? Shouldn’t the right to free speech include the right to create pictures of whatever we want? Spearheading legal efforts to re-legalise such content, on the grounds that the CPPA violated the First Amendment, was the Free Speech Coalition, a trade association of the U.S. adult entertainment industry. Its first legal success came in late 1999, when the Ninth Circuit found that Congress could not constitutionally prohibit virtual child pornography. This was followed by the Ashcroft v. Free Speech Coalition ruling in 2002, in which the Supreme Court struck down the relevant portions of the CPPA, judging them to be overbroad. Justice Anthony M. Kennedy, who delivered the 6-3 majority opinion, wrote that the CPPA “prohibits speech that records no crime and creates no victims by its production.”
But this decision had a destructive consequence: subsequent defendants in U.S. CSAM cases “almost universally” began contending that the images in question could be virtual, and therefore legal. (U.S. law still uses the term “child pornography” rather than CSAM, though the term is generally no longer considered acceptable.) In response, just a year later, the 2003 PROTECT Act was enacted, which made most computer-generated CSAM illegal again. According to the Act, the onus had been placed on the government in the past year, “in nearly every child pornography prosecution, to find evidence that the child was real”. Some defences of this kind had been successful. Disturbingly, the number of prosecutions being brought was significantly reduced, because the resources required for each case were so much higher. The same official document notes that, because the very existence of computer-generated CSAM raised the possibility of reasonable doubt that any digital image was of a real child, laws to protect real children were on the verge of becoming “unenforceable”. Finally, the additional requirement imposed by some courts, that the government prove the defendant was aware the image depicted a real child, threatened to result in the “de facto legalization of the possession, receipt, and distribution of child pornography for all except the original producers of the material”. These devastating effects should inform any lawmakers attempting to clarify laws on AI-CSAM — and deter any attempting to legalise it.
Thanks to the PROTECT Act, AI-CSAM is illegal in the U.S. in almost all cases. On 20 May, it was announced that a Wisconsin man, Steven Anderegg, had been arrested by the FBI and charged over the “alleged production, distribution, and possession of AI-generated images of minors engaged in sexually explicit conduct and his transfer of similar sexually explicit AI-generated images to a minor.” Evidence from Anderegg’s laptop revealed that he had used Stable Diffusion with “add-ons created by other Stable Diffusion users that specialized in producing genitalia” to create photo-realistic images of children; more than 13,000 AI images were found on his laptop, many of them depicting children in sexually explicit acts.
But why have there been so few arrests of this kind? Why has this content not been eliminated from large platforms? And why is the subject frequently omitted from discussions around the harms of AI? I suspect attitudes — the substitute hypothesis? — have played a part in this, not just among the general public but perhaps also among those in government.
As AI-generated content of this kind becomes more widespread, the dangers of considering it harmless are becoming manifest. In February, the Lucy Faithfull Foundation launched a campaign raising awareness about AI-CSAM, revealing that its “Stop it Now” helpline was already receiving calls specifically regarding AI content, with some callers unaware that such material is illegal, or even questioning whether it is morally wrong.
The charity is right to be concerned, and to be raising awareness about both the harms and the illegal status of AI-CSAM. As I have made clear, this content endangers real children in numerous ways. It creates online hotspots where paedophiles gather, it often leads paedophiles toward CSAM of real children, it creates gateway risks toward child abuse offline, and it dramatically complicates the job of law enforcement, which must now not only identify CSAM but also determine which material is real and which is not.
AI-CSAM is no substitute, only a gateway, and its creation and distribution are anything but a “victimless crime”. Those who fill internet forums or in-person debates with arguments that no real children are harmed by this content are — however unwittingly — indirectly harming children themselves.