A boom in deepfake porn is outpacing US and European efforts to regulate the technology. (AFP)

Google and Microsoft Face Pressure to Curb AI-Generated Fake Porn

Fans of Kaitlyn Siragusa, a well-known internet personality who goes by Amouranth, can subscribe to her channel on Amazon.com Inc.’s Twitch for $5 per month to watch her play video games. And if they wish to view her adult content, they can subscribe to her explicit OnlyFans page for $15 per month.

And when they want to watch her do things she doesn’t do and never has, for free, they search Google for so-called “deepfakes”: videos made with artificial intelligence that graft the face of a real woman onto a realistic simulation of a sexual act.

Siragusa, a frequent target of deepfake creators, said that every time her staff finds a new video on the search engine, they file a complaint with Google and fill out a form requesting the removal of the specific link, a process that takes time and energy. “The problem,” Siragusa said, “is the constant struggle.”

The recent AI boom has brought a surge in the creation of non-consensual pornographic deepfakes, with the number of videos increasing ninefold since 2019, according to research by independent analyst Genevieve Oh. Nearly 150,000 videos, with a combined 3.8 billion views, appeared across 30 sites in May 2023, according to Oh’s analysis. Some sites offer libraries of deepfake programming, featuring the faces of celebrities like Emma Watson or Taylor Swift grafted onto the bodies of porn performers. Others offer paying customers the opportunity to “undress” women they know, such as classmates or co-workers.

Some of the biggest names in tech, including Alphabet Inc.’s Google, Amazon, X and Microsoft Corp., own tools and platforms that have supported the recent rise of deepfake porn. Google, for example, is the main driver of traffic to widely used deepfake sites, while users of X, formerly known as Twitter, regularly circulate deepfake content. Amazon, Cloudflare and Microsoft’s GitHub all provide essential hosting services for these sites.

There are no easy solutions for subjects of deepfake porn who want to hold someone accountable for financial or emotional harm. No federal law currently criminalizes the creation or distribution of non-consensual deepfake porn in the United States. Thirteen states have passed legislation addressing such content in recent years, according to WilmerHale LLP attorney Matthew Ferraro, resulting in a patchwork of civil and criminal laws that have proven difficult to enforce. So far, there have been no prosecutions in the United States for the creation of sexualized content generated by artificial intelligence, according to Ferraro’s research. As a result, victims like Siragusa are mostly left to their own devices.

“People are always posting new videos,” Siragusa said. “Seeing yourself in porn that you didn’t consent to is gross, on an emotional and human level.”

Recently, however, a growing group of tech policy lawyers, researchers and victims opposed to the production of deepfake pornography have begun exploring a new way to attack the problem. To attract users, make money and keep operating, deepfake sites rely on a vast network of technical products and services, many of them provided by large, publicly traded companies. While such transactional online services are generally well protected legally in the United States, opponents of the deepfake industry see its dependence on the services of press-sensitive tech giants as a potential vulnerability. Increasingly, they are appealing directly to tech companies, and publicly pressuring them, to remove harmful AI-generated content and deplatform the sites that host it.

“Industry needs to take the lead and do some self-governance,” said Brandie Nonnecke, founding director of the CITRIS Policy Lab, which specializes in technology policy. Along with other deepfake researchers, Nonnecke has argued that companies should verify whether a person has authorized the use of their face, or has given rights to their name and likeness.

She said the victims’ best hope for justice is for tech companies to “grow a conscience.”

Among other things, activists want search engines and social media networks to do more to curb the spread of deepfakes. Currently, any internet user who types a famous woman’s name into Google alongside the word “deepfake” may be shown dozens of links to deepfake sites. Between July 2020 and July 2023, monthly traffic to the top 20 deepfake sites grew by 285 percent, according to data from web analytics firm SimilarWeb, with Google the single largest driver of traffic. In July, search engines drove 248,000 visits each day to the most popular site, Mrdeepfakes.com, and a total of 25.2 million visits to the top five sites. SimilarWeb estimates that Google search accounts for 79 percent of global search traffic.

Nonnecke said Google should do more “due diligence to create an environment where, if someone searches for something horrible, the horrible results don’t immediately appear in the feed.” Siragusa, for her part, said Google should “ban deepfake search results” entirely.

In response, Google said that, like any search engine, it indexes content that exists on the web. “But we actively design our ranking systems to avoid shocking people with unexpected harmful or explicit content they don’t want to see,” spokesperson Ned Adriance said. The company said it has developed safeguards to help people affected by non-consensual fake pornography, including allowing them to request the removal of pages about them that include the content.

“As this situation evolves, we are actively working to increase safeguards to protect people,” Adriance said.

Activists would also like social media networks to do more. X already has policies that prohibit synthetic and manipulated media, yet such content regularly circulates among its users. According to Dataminr, a company that monitors social media for breaking news, dozens of tweets appear each day across three hashtags that flag deepfake videos and photos. Between the first and second quarters of 2023, the number of tweets across eight hashtags related to this content rose 25 percent, to 31,400 tweets, according to the data.

X did not respond to a request for comment.

Deepfake sites also rely on large technology companies for basic network infrastructure. Thirteen of the top 20 deepfake sites currently use web services from Cloudflare Inc. to stay online, according to a Bloomberg analysis. Amazon.com Inc. provides web hosting for three popular deepfake tools listed on several of the sites, including Deepswap.ai. Previous public pressure campaigns have successfully convinced web hosting companies, including Cloudflare, to stop working with controversial sites ranging from 8chan to Kiwi Farms. Advocates hope that similar pressure on the companies hosting deepfake porn sites and tools could achieve the same result.

Cloudflare did not respond to a request for comment. A spokesperson for Amazon Web Services pointed to the company’s terms of service, which prohibit illegal or harmful content, and asked people who see such material to report it to the company.

Recently, the tools used to create deepfakes have grown both more powerful and more accessible. Photorealistic face-swapped images can be created on demand using tools such as Stable Diffusion, the model made by Stability AI. Because the model is open source, any developer can download and modify the code for a myriad of purposes, including creating realistic adult pornography. Online forums for creators of deepfake pornography are full of people exchanging advice on how to create such images using an earlier release of Stability AI’s model.

Emad Mostaque, CEO of Stability AI, called such abuse “deeply deplorable” and referred to the forum as “disgusting.” Stability AI has put some safeguards in place, he said, including banning the use of porn in the model’s training data.

“What bad actors do with open source code cannot be controlled, but there is much more that can be done to identify and criminalize this activity,” Mostaque said by email. “The AI developer community, as well as the infrastructure partners that support this industry, must play a part in reducing the risk of AI abuse and harm.”

Hany Farid, a professor at the University of California, Berkeley, said makers of technology tools and services should explicitly prohibit deepfakes in their terms of use.

“We need to start thinking differently about the responsibilities of the engineers developing the tools in the first place,” Farid said.

Although many of the apps that deepfake creators and users recommend for making deepfake porn are web-based, some are readily available in the mobile app stores run by Apple Inc. and Alphabet Inc.’s Google. Four of these mobile apps have between 1 million and 100 million downloads in the Google Play Store. One, FaceMagic, has served ads on porn sites, according to a report in VICE.

Deepfake researcher Henry Ajder said apps often used to target women online are frequently marketed innocuously as tools for AI image animation or photo enhancement. “There’s a broad trend that the easy-to-use tools on the phone are directly related to more individuals, the everyday woman, being targeted,” he said.

FaceMagic did not respond to a request for comment. Apple said it tries to ensure the trust and safety of its users, and that under its guidelines, services used primarily for the consumption or distribution of pornographic content are strictly prohibited from its app store. Google said apps that attempt to threaten or sexually exploit people violate its developer policies.

Users of Mrdeepfakes.com recommend DeepFaceLab, an AI-powered tool used to create non-consensual pornographic content, which is hosted on Microsoft Corp.’s GitHub. The cloud-based software development platform also currently hosts several other tools often recommended on deepfake sites and forums, including one that, until mid-August, featured a video of a woman naked from the chest up, her face swapped with another woman’s. That app has received nearly 20,000 “stars” on GitHub. Its developers removed the video and suspended the project this month after Bloomberg sought comment.

A GitHub spokesperson said the company condemns “the use of GitHub to post sexually obscene content,” and the company’s user policies prohibit such behavior. The spokesperson added that the company conducts “some proactive screening of such content in addition to actively investigating reports of abuse” and that GitHub will take action “if content violates our terms.”

Bloomberg analyzed hundreds of crypto wallets linked to deepfake creators, who apparently make money by selling access to video libraries, accepting donations or charging customers for customized content. These wallets regularly receive transactions of hundreds of dollars, possibly from paying customers. Forum users who create deepfakes recommend web-based tools that accept payments through mainstream processors, including PayPal Holdings Inc., Mastercard Inc. and Visa Inc., another potential pressure point for activists who want to stem the tide of deepfakes.

Mastercard spokesman Seth Eisen said the company’s standards do not permit non-consensual activity, including such deepfake content. Spokespeople for PayPal and Visa did not comment.

Until mid-August, the membership platform Patreon supported payments for one of the largest nudification tools, which was taking in more than $12,500 per month from Patreon subscribers. Patreon suspended the account after Bloomberg sought comment.

Patreon spokesman Laurent Crenshaw said the company has “zero tolerance for pages that contain non-consensual intimate images, as well as pages that encourage others to create non-consensual intimate images.” Crenshaw added that the company is reviewing its policy “as artificial intelligence continues to disrupt many areas of the creative economy.”

Attorney Carrie Goldberg, who specializes in cases involving the non-consensual sharing of sexual material, said technology platforms ultimately control the impact deepfake porn has on its victims.

“As technology has affected every aspect of our lives, we’ve simultaneously made it harder to hold anyone accountable when the same technology harms us,” Goldberg said.
