Pedophiles use AI to create images of child sexual abuse

The rise of AI has sparked a huge surge in computer-generated child abuse imagery (Picture: Getty)

Pedophiles are using AI programs to produce lifelike images of child sexual abuse, raising concerns among child safety investigators that the images will undermine efforts to find victims and combat real-world abuse.

According to a Washington Post report, the rise of AI technology has sparked a “predatory arms race” on pedophile forums across the dark web.

The creators of the abusive images use software called Stable Diffusion, which was designed to generate images for use in art or graphic design.

But criminals are using the software to create realistic images of children performing sexual acts, and are sharing detailed instructions so that other pedophiles can produce their own.

Pedophiles are using images of child sexual abuse to create AI copies on an industrial scale (Picture: Getty)

“Children’s images, including the content of known victims, are being repurposed for this really evil output,” said Rebecca Portnoff, director of data science at the nonprofit child safety group Thorn.

“Victim identification is already a needle-in-a-haystack problem, with law enforcement trying to find a child in danger,” she told the Post. “The ease of use of these tools represents a significant shift, as does the realism. It just makes everything more of a challenge.”

In the UK, a computer-generated ‘pseudo-image’ depicting child sexual abuse is treated the same as a real image and is illegal to possess, publish or transmit.

Ian Critchley, the National Police Chiefs’ Council (NPCC) lead for child protection, said it would be wrong to argue that no one is harmed simply because such “synthetic” images do not depict real children.

He told the BBC the programs could allow pedophiles to move along a scale of offending, from thought, to synthetic imagery, to the actual abuse of a living child.

The emergence of such images also threatens to undermine efforts to find victims and combat genuine abuse, forcing law enforcement agencies to spend extra effort investigating whether a photo is real or fake.

According to the publication, AI-generated child sex images could confound the central tracking system that blocks such material across the web, as it is designed only to catch known images of abuse rather than to detect newly generated ones.

The volume of AI-generated imagery makes it far harder to identify victims and their abusers (Picture: Getty)

Law enforcement officials working to identify abused children may now be forced to spend time determining whether images are real or AI-generated.

AI tools can also re-victimize survivors whose photos of past abuse are used to train models to create fake images.

Some of the image creators post on Pixiv, a popular Japanese social media platform mostly used by artists sharing manga and anime.

Because the site is hosted in Japan, where sharing sexualized cartoons and drawings of children is not illegal, creators can share their work via groups and hashtags.

The subscription platform Patreon is also used to host the obscene images: accounts sell AI-generated, photorealistic pictures of children behind a paywall, with prices varying according to the type of material requested.

Journalist Octavia Sheepshanks told the BBC her research revealed that users appear to be making images of child abuse on an industrial scale.

“The volume is just huge, so people [creators] will say ‘we aim to do at least 1,000 images a month’,” she said.

Images are often hosted on Japanese social media platforms, where sexualized drawings of minors are not illegal (Picture: Getty)

“Within those groups, which will have 100 members, people will share, ‘Oh, here’s a link to real stuff,'” she added.

“The worst part is, I didn’t even know words [the descriptions] like that existed.”

On dark web pedophile forums, users openly discussed strategies for creating explicit photos and bypassing anti-porn filters, including by using non-English languages they believe are less vulnerable to suppression or detection.

According to the Post, on one forum with 3,000 members, about 80 percent of respondents to a recent internal survey said they had used or planned to use AI tools to create images of child sexual abuse.

Forum members also discussed ways to create AI-generated selfies and build a fake school-age persona in hopes of gaining children’s trust, the publication reported.

Ms Portnoff said her group has also seen cases where real photographs of abused children were used to train AI tools to create new images showing those children in sexual positions.

A spokesman for Patreon said it has a “zero tolerance” policy toward hosting images of child abuse, real or not.

“We already ban AI-generated synthetic child exploitation material,” it said, describing itself as “very proactive”, with dedicated teams, technology and partnerships to “keep teens safe”.

A spokesman for Pixiv said the platform is also committed to tackling the issue, and that on May 31 it banned all photorealistic depictions of sexual content involving minors.


Justin Scaccy

