Now YouTube videomakers will have to disclose whether their videos are AI generated. (AFP)

Reveal the AI: YouTube Demands Disclosure for Generative Videos!

Video creators on YouTube, the video platform owned by Google’s Alphabet Inc., will soon be obligated to reveal if they have uploaded manipulated or synthetic content that appears authentic, including videos generated using artificial intelligence technology.

The policy update, which will take effect sometime in the new year, could affect videos that use generative AI tools to realistically depict events that never happened, or show people saying or doing something they didn't actually do. "This is especially important in cases where the content deals with sensitive topics such as elections, ongoing conflicts and public health crises, or public officials," said Jennifer Flannery O'Connor and Emily Moxley, YouTube's vice presidents of product management, in a company blog post on Tuesday. Creators who repeatedly choose not to disclose when they've posted synthetic content could be subject to content removal, suspension from the program that allows them to earn advertising revenue, or other penalties, the company said.

When content is digitally manipulated or generated, creators must select the option to display YouTube's new warning in the video's description panel. For certain types of content on sensitive topics, such as elections, ongoing conflicts and public health crises, YouTube will display the label more prominently in the video player itself. The company said it will work with content creators before the policy goes live to make sure they understand the new requirements, and is developing its own tools to detect when the rules are being broken. YouTube also committed to automatically labeling content created with its own AI tools on creators' behalf.

Google — which both makes tools that can create generative AI content and owns platforms that can spread such content over a wide area — is facing new pressure to adopt the technology responsibly. Earlier on Tuesday, the company’s head of legal affairs, Kent Walker, published a company blog post outlining Google’s “AI Opportunity Agenda,” a white paper that includes policy recommendations aimed at helping governments around the world think about AI development.

“Responsibility and opportunity are two sides of the same coin,” Walker said in an interview. “It’s important that while we focus on the responsible side of storytelling, we don’t lose excitement or optimism about what this technology can do for people around the world.”

Like other user-generated media services, Google and YouTube have had to crack down on the spread of misinformation on their platforms, including lies about elections and global crises like the Covid-19 pandemic. Google has already begun to grapple with concerns that generative AI could create a new wave of misinformation, announcing in September that it would require "visible" disclosures about AI-generated election ads. Advertisers were told they must include statements such as "This voice is computer generated" or "This image does not depict actual events" in altered election ads on Google's platforms. The company also said that YouTube's community guidelines, which prohibit digitally manipulated content that could pose a serious risk of harm, already apply to all video content uploaded to the platform.

In addition to the new generative AI labels YouTube plans to add to the video platform, the company said it will allow people to request the removal of AI-generated or synthetic content that simulates an identifiable person through its privacy request process. A similar option is being offered to music partners, who can request the removal of AI-generated music content that mimics an artist's singing or rapping voice, YouTube said.

The company said that not all content will be automatically removed when a request is made; rather, it will "consider a number of factors in evaluating these requests." If a removal request targets a video that contains, for example, parody or satire, or if the person making the request cannot be uniquely identified, YouTube may decide to leave the content up on the platform.
