Public anger erupts over the use of deepfake technology to create disturbing images of singer Taylor Swift
On Friday, both Taylor Swift fans and politicians voiced their anger over the circulation of AI-generated fake images that gained widespread attention on X and remained accessible on other platforms. A particular image featuring the renowned American artist was viewed a staggering 47 million times on X, previously known as Twitter, before being taken down on Thursday. Reports from US media indicate that the post remained active on the platform for approximately 17 hours.
Celebrity deepfakes aren’t new, but activists and regulators are concerned that easy-to-use tools using generative artificial intelligence (AI) are creating an uncontrollable flood of toxic or harmful content.
But targeting Swift, Spotify’s second most-streamed artist in the world (behind Canadian rapper Drake), could shine a new light on the phenomenon with her legions of fans outraged by the development.
“Taylor Swift’s only ‘silver lining’ is that she probably has enough power to get legislation passed to remove it. You people are sick,” wrote influencer Danisha Carter on X.
Analysts say X is one of the world’s biggest platforms for porn content, as its nudity policies are more lax than Meta-owned Facebook or Instagram.
Apple and Google, the gatekeepers of mobile content, have effectively tolerated this through the guidelines they set for their app stores on iPhone and Android smartphones.
X said in a statement that “Posting Non-Consensual Nudity (NCN) images is strictly prohibited at X and we have zero tolerance for such content.”
The Elon Musk-owned platform said it “actively removed all identified images and will take appropriate action against the accounts responsible for posting them.”
It was also “closely monitoring the situation to ensure that any new violations are dealt with immediately and the content is removed.”
Representatives for Swift did not immediately respond to a request for comment.
“Easier and cheaper”
“What has happened to Taylor Swift is nothing new. For years, women have been targeted by deepfakes without their consent,” said Yvette Clarke, a Democratic congresswoman from New York who has sponsored legislation to combat deepfakes.
“And with advances in artificial intelligence, it’s easier and cheaper to create deepfakes,” she added.
Republican Congressman Tom Kean warned that “AI technology is advancing faster than the necessary safeguards. Whether the victim is Taylor Swift or any young person elsewhere in our country, we must create safeguards to combat this alarming trend.”
Many well-publicized cases of deepfake audio and video have targeted politicians or celebrities, but women are by far the biggest targets, with graphic, sexual images of them easily found on the Internet.
Software for creating images is widely available online.
According to a study cited by Wired magazine, 113,000 deepfake videos were uploaded to the most popular porn sites in the first nine months of 2023.
And in 2019, a study by a startup company found that 96 percent of deepfake videos on the Internet were pornographic.