50 State Attorneys General Ask Congress to Assist in Combatting AI-Generated Child Sexual Abuse Material
According to the AP, the attorneys general of all 50 states have united in an open letter to Congress, urging lawmakers to implement stronger safeguards against AI-assisted child sexual abuse imagery. The letter specifically asks Congress to form a specialized commission to thoroughly examine the ways AI can be used to exploit children.
The letter, sent to the Republican and Democratic leaders of the House and Senate, also urges lawmakers to expand existing restrictions on child sexual abuse material to explicitly cover images and videos created by artificial intelligence. The technology is so new that nothing on the books yet places AI-generated images in the same category as other child sexual abuse material.
“We are in a race against time to protect our nation’s children from the dangers of artificial intelligence,” the prosecutors wrote in the letter. “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”
Using image generators such as DALL-E and Midjourney to create child sexual abuse material is not yet a widespread problem, as those tools include safeguards against such activity. But the prosecutors are looking to the future, as open-source versions of the software proliferate, each with its own guardrails or lack thereof. Even OpenAI CEO Sam Altman has said that AI tools would benefit from government action to reduce risks, though he did not mention child abuse as a potential downside of the technology.
The government tends to move slowly on technology for a number of reasons; it took Congress several years to take the threat of online child abusers seriously in the days of AOL chat rooms and the like. To that end, there’s no immediate indication that Congress will draft AI legislation that strictly prohibits generators from creating these kinds of images. Even the European Union’s sweeping artificial intelligence law does not specifically mention risks to children.
South Carolina Attorney General Alan Wilson, who organized the letter-writing campaign, has encouraged his colleagues to examine their state statutes to determine whether existing laws have kept pace with the technology.
Wilson warns about deepfaked content that features a real child, generated from a photo or video. This would not be child abuse in the usual sense, Wilson says, but it would depict abuse and would “defame” and “exploit” the child from the original image. “Our laws don’t necessarily address the virtual nature of a situation like this,” he adds.
The technology could also be used to invent imaginary children, drawing on a library of data, in order to produce sexual exploitation material. Wilson says even that would create “a demand for an industry that exploits children,” an argument against the notion that such material doesn’t actually harm anyone.
Although the idea of deepfaked child sexual abuse material is fairly new, the technology industry has long been aware of deepfake pornography and has taken steps to combat it. In February, Meta, OnlyFans, and Pornhub began using an online tool called Take It Down, which allows teenagers to report explicit pictures and videos of themselves for removal from the internet. The tool covers both regular images and AI-generated content.