BBC exposes illegal trade in AI-generated child sexual abuse images


Source: BBC

The BBC has uncovered a disturbing trend in which paedophiles are exploiting artificial intelligence (AI) to create and distribute realistic child sexual abuse material. These offenders are using AI software, including Stable Diffusion, originally developed for artistic purposes, to generate lifelike images depicting the rape of infants and toddlers.

According to the investigation, some offenders gain access to these explicit images by subscribing to accounts on popular content-sharing platforms like Patreon. Patreon, in response to the findings, stated that it has a “zero tolerance” policy toward such content on its site. However, the National Police Chiefs’ Council has criticized platforms that profit from this illicit trade without assuming moral responsibility.

GCHQ, the UK government’s intelligence and cyber agency, acknowledged the report and warned that offenders involved in child sexual abuse increasingly exploit new technologies, with AI-generated content considered a concerning future trend.

Stable Diffusion, the AI software used by the abusers, lets users describe a desired image in a text prompt, after which the program generates a corresponding picture. This ease of use has enabled the creation of a massive volume of AI-generated child abuse imagery, as discovered by freelance researcher and journalist Octavia Sheepshanks, who contacted the BBC through the NSPCC children’s charity to raise awareness of her findings.

Law enforcement agencies, responsible for investigating online child abuse, have already encountered these AI-generated images. The National Police Chiefs’ Council emphasized that the possession, publication, or transfer of AI-generated “pseudo images” depicting child sexual abuse is illegal and can contribute to the escalation of real-life offenses.

The distribution of these abuse images occurs through a three-stage process. First, paedophiles create the images using AI software. They then promote these pictures on platforms like Pixiv, a popular Japanese social media platform primarily used by artists sharing manga and anime. Finally, they direct potential customers to their more explicit content on platforms such as Patreon, where individuals can pay to access the illicit material.

Pixiv, being hosted in Japan where the sharing of sexualized cartoons and drawings of children is not illegal, has become a hub for these offenders to promote their work. Although Pixiv has banned photo-realistic depictions of sexual content involving minors, the BBC investigation reveals that users continue to share links to real abuse material within the platform.

The investigation also uncovered the presence of Patreon accounts offering AI-generated, photo-realistic obscene images of children for sale. Pricing varies depending on the type of material requested, with some accounts advertising “exclusive uncensored art” for a monthly fee.

Patreon, valued at approximately $4 billion (£3.1 billion), claims to have implemented a “zero-tolerance” policy against sexual themes involving minors and has pledged to remove AI-generated harmful content from its platform. The company recognizes the distressing increase in such material on the internet and asserts its commitment to keeping teenagers safe through proactive measures, including dedicated teams, technology, and partnerships.

While AI image generator Stable Diffusion was initially developed as a collaboration between academics and various companies, the release of an “open-source” version last year enabled users to circumvent content filters and generate illegal images. Stability AI, the UK company leading the development, stated its strong opposition to the misuse of their products for illegal or immoral purposes and expressed support for law enforcement efforts against such abuse.

The implications of AI technology and its potential risks to privacy, human rights, and safety have sparked concerns. GCHQ’s Counter Child Sexual Abuse (CCSA) Mission Lead highlighted the importance of staying ahead of emerging threats, including AI-generated content, to ensure that offenders have no safe space. The NPCC and other experts warn that the flood of realistic AI-generated images may hamper the identification of actual victims of abuse, posing additional challenges for law enforcement.

Children’s charity NSPCC has called upon tech companies to address this urgent issue, urging them to take action against the misuse of their platforms. The government responded by emphasizing that the forthcoming Online Safety Bill will require companies to proactively combat online child sexual abuse or face substantial fines.
