Stop the AI Fakers: YouTube Takes Action!
In an announcement on Tuesday, YouTube revealed its plans to enable users to request the removal of AI-generated imposters from the platform. Additionally, the company will mandate the inclusion of labels on videos containing lifelike “synthetic” content.
New rules on video footage produced by artificial intelligence will come into effect in the coming months amid growing fears that the technology could be misused to promote scams and misinformation, or to falsely depict people in pornography.
“We’re making it possible to request the removal of AI-generated or other synthetic or manipulated content that simulates an identifiable person, including their face or voice,” YouTube vice presidents of product management Emily Moxley and Jennifer Flannery O’Connor said in a blog post.
When evaluating takedown requests, the Alphabet-owned platform will consider factors such as whether the video is parody or satire and whether the real person depicted can be uniquely identified.
YouTube also plans to start requiring content creators to disclose when realistic video content has been made with artificial intelligence, so that viewers can be informed through labels.
“This could be an AI-generated video that realistically depicts an event that never happened, or content where someone says or does something they didn’t actually do,” Moxley and O’Connor said in the post.
“This is especially important in cases where the content deals with sensitive topics such as elections, ongoing conflicts and public health crises or public officials.”
Creators who violate the policy can have their videos removed from YouTube or be suspended from its partner program, which shares advertising revenue with creators, the platform said.
“We’re also introducing the ability for our music partners to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice,” Moxley and O’Connor added.
Elsewhere on the Internet, Meta said last week that advertisers will soon have to disclose on their platforms when artificial intelligence or other software is used to create or alter images or audio in political ads.
The requirement will enter into force worldwide on Facebook and Instagram from the beginning of next year, parent company Meta said.
Advertisers must also disclose when AI is used to create completely fake but realistic people or events, according to Meta.
Meta will add notices to such ads to let viewers know that what they are seeing or hearing was produced with software tools, the company said.
“The world in 2024 could see a number of authoritarian nation-states seek to interfere in electoral processes,” warned Microsoft general counsel Brad Smith and vice president Teresa Hutson, whose company is a major backer of OpenAI, maker of the pioneering generative artificial intelligence chatbot ChatGPT.
“And they can combine traditional techniques with artificial intelligence and other new technologies to threaten the integrity of election systems.”