Watchdog Urges Action to Prevent AI-Generated Child Sexual Abuse Images from Invading the Internet
The already alarming proliferation of child sexual abuse images on the internet could become much worse if nothing is done to put controls on the artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Tuesday.
In a written report, the UK-based Internet Watch Foundation urges governments and technology providers to act quickly before the flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly expands the pool of potential victims.
“We’re not talking about the harm it might cause,” said Dan Sexton, the watchdog group’s chief technology officer. “This is happening right now and it needs to be addressed right now.”
In a first-of-its-kind case in South Korea, a man was sentenced in September to 2 1/2 years in prison for using artificial intelligence to create 360 virtual child abuse images, according to the Busan District Court in the country’s southeast.
In some cases, children are using these tools on one another. At a school in southwest Spain, police are investigating teenagers’ alleged use of a phone app to make their fully clothed classmates appear naked in photographs.
The report reveals the dark side of the race to create generative AI systems that allow users to describe in words what they want to produce – from emails to new artwork or videos – and have the system spit it out.
If left unchecked, the flood of AI-generated child sexual abuse images could bog down investigators trying to rescue children who turn out to be virtual characters. Perpetrators could also use the images to groom and coerce new victims.
Sexton said IWF analysts found the faces of famous children online, as well as “a huge demand to create more images of children who have already been abused, possibly years ago”.
“They’re taking existing real content and using it to create new content about these victims,” he said. “It’s just incredibly shocking.”
Sexton said his charity, which focuses on combating child sexual abuse online, first began receiving reports of abusive AI-generated imagery earlier this year. That led to an investigation of so-called dark web forums, a part of the internet hosted within an encrypted network and accessible only through tools that provide anonymity.
IWF analysts found abusers sharing tips and marveling at how easy it was to turn their home computers into factories churning out sexualized images of children of all ages. Some are also trading, and trying to profit from, images that look increasingly lifelike.
“What we’re starting to see is this explosion of content,” Sexton said.
While the IWF report is meant to flag a growing problem rather than offer prescriptions, it urges governments to strengthen laws to make it easier to combat AI-generated abuse. It particularly targets the European Union, where there is a debate over surveillance measures that could automatically scan messaging apps for suspected images of child sexual abuse, even if the images are not previously known to law enforcement.
A major focus of the group’s work is preventing past victims of sexual abuse from being victimized again through the redistribution of their images.
According to the report, technology providers could do more to make it harder for the products they build to be used this way, though the task is complicated by the fact that some tools are hard to put back in the bottle.
Last year, a number of new artificial intelligence image generators were introduced, delighting the public with their ability to conjure original or photorealistic images on command. But most of them are not popular with producers of child sexual exploitation material because they contain mechanisms to prevent it.
Technology providers that have closed AI models with full control over how they are trained and used — such as OpenAI’s image generator DALL-E — have been more successful at preventing abuse, Sexton said.
In contrast, the tool preferred by producers of child sexual abuse images is the open-source Stable Diffusion, developed by the London-based startup Stability AI. When Stable Diffusion burst onto the scene in the summer of 2022, some users quickly learned to use it to create nudity and pornography. Most of that material depicted adults, but it was often nonconsensual, such as when it was used to create celebrity-inspired nudes.
Stability later rolled out new filters that block unsafe and inappropriate content, and the license to use Stability’s software includes a prohibition on illegal uses.
In a statement released Tuesday, the company said it “strictly prohibits any misuse for illegal or immoral purposes” across all of its platforms. “We strongly support law enforcement action against those who misuse our products for illegal or malicious purposes,” the statement said.
However, users can still access older versions of Stable Diffusion, which are “mostly the software of choice … for people who create explicit content involving children,” said David Thiel, chief technologist at Stanford’s Internet Observatory, another watchdog group studying the problem.
The IWF report acknowledges that it is difficult to criminalize the AI imaging tools themselves, even those that are “fine-tuned” to produce offensive material.
“You can’t regulate what people do on their computers, in their bedrooms. It’s not possible,” Sexton added. “So how do you get to the point where they can’t use openly available software to create this kind of malicious content?”
Most AI-generated images of child sexual abuse would be considered illegal under existing laws in the US, UK and elsewhere, but it remains to be seen whether law enforcement has the tools to combat them.
A British police official said the report shows the impact already being felt by officers working to identify victims.
“We see children being groomed, we see creators making their own images according to their own instructions, we see AI images being produced for commercial gain – all of which normalize the rape and abuse of real children,” Ian Critchley, director of child protection for the National Police Chiefs’ Council, said in a statement.
The IWF report is timed ahead of a global AI safety summit hosted by the UK government next week, whose high-profile attendees include US Vice President Kamala Harris and technology leaders.
“While this report paints a bleak picture, I am optimistic,” IWF CEO Susie Hargreaves said in a written statement. She said it was important to communicate the reality of the problem “to a wider audience because we need to discuss the darkest aspects of this amazing technology”.