YouTube is now asking creators to identify AI-generated videos so that viewers are not misled.
YouTube monitors the platform for AI-generated content and is asking users and content creators to flag synthetically generated material. The new labels show that the platform is concerned about videos created with AI tools, particularly in cases where the video or audio could trick viewers.
YouTube recognizes that viewers need to know how the content they watch was made; it is asking creators for transparency and expects the self-disclosure requirement to be followed properly.
“In Creator Studio, we’re introducing a new tool that requires content creators to disclose to viewers when realistic content—content that a viewer could easily mistake for a real person, place, or event—was made with altered or synthetic media, including generative AI,” YouTube said in a post this week.
The AI label will appear to viewers on both regular videos and YouTube Shorts. However, YouTube does not require labels for content edited with beauty filters, special effects such as background blur, or animation-style edits.
AI-generated content has become a concern for major platforms, and governments have made it clear that companies such as Meta, YouTube and Google need a proactive way to counter the potential misuse of AI to spread misinformation and fake news.
With two major elections in the US and India this year, and AI being used to target people with misleading content, it is vital that YouTube controls what kind of content is posted and ensures that AI-generated content is labeled correctly. The platform has faced other content challenges that have proved difficult to police, but we hope it handles the AI era better; otherwise it could set off a major digital crisis.