Will self-regulation be enough in the deepfake age as Meta plans to label AI-generated content on Facebook and Instagram?
In a groundbreaking move, Meta — the parent company of Facebook, Instagram and Threads — announced a new policy aimed at responding to growing concerns about AI-generated content. Under the policy, it will begin labeling AI-generated images as “Imagined with AI” to distinguish them from human-created content.
Here are the key highlights from Meta’s new policy, which was announced on Tuesday (February 6):
- Introducing “Imagined with AI” labels on photorealistic images created using Meta’s artificial intelligence feature.
- Using visible tags, invisible watermarks, and metadata embedded in image files to indicate AI involvement in content creation.
- Applying community standards to all content, regardless of its origin, with a focus on detecting and addressing harmful content.
- Collaborating with other industry players in forums such as the Partnership on AI (PAI) to develop common standards for identifying AI-generated content.
- Making AI-generated content eligible for fact-checking by independent partners, with debunked content labeled so that users have accurate information.
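The second bullet mentions invisible watermarks and embedded metadata. Meta has not published the details of its watermarking scheme, but the general idea can be illustrated with a toy least-significant-bit (LSB) watermark — a hypothetical sketch, not Meta’s actual method — in which a machine-readable marker is hidden in the low-order bits of pixel data:

```python
# Toy LSB watermark sketch (hypothetical; production schemes such as
# Meta's are more robust and are not public). Each bit of the marker
# replaces the least significant bit of one pixel byte, which changes
# the image imperceptibly but survives a byte-for-byte copy.

def embed(pixels: bytes, bits: str) -> bytes:
    """Hide a bit string in the low bits of the first len(bits) bytes."""
    out = bytearray(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | int(b)  # clear LSB, then set it to b
    return bytes(out)

def extract(pixels: bytes, n: int) -> str:
    """Read back the first n hidden bits."""
    return "".join(str(p & 1) for p in pixels[:n])

pixels = bytes(range(16))          # stand-in for raw image bytes
marked = embed(pixels, "1010")     # "1010" stands in for an AI marker
print(extract(marked, 4))          # prints 1010
```

A real scheme would spread the marker redundantly across the image and make it robust to compression and cropping; the point here is only that a watermark can travel with the pixels themselves, unlike a visible tag or a metadata field that is stripped when the image is re-encoded.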
What did Meta say?
Nick Clegg, Meta’s president of global affairs, said in a blog post: “While companies are starting to incorporate signals into their image generators, they haven’t started incorporating them into AI tools that produce audio and video at the same scale, so we’re not yet able to detect these signals and flag this content from other companies. As the industry moves toward this capability, we’re adding a feature that allows people to disclose when they’re sharing AI-generated video or audio so we can add a tag to it.”
“We require people to use this disclosure and flagging tool when they post organic content with photorealistic video or realistic-sounding audio that has been digitally created or altered, and we may impose penalties if they don’t. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of misleading the general public on an important matter, we can add more prominent labeling if necessary so that people have more information and context,” he added.
Self-regulation and the role of government
The announcement comes amid ongoing discussions between India’s Ministry of Electronics and IT and industry officials about the regulation of deepfakes. Minister of State Rajeev Chandrasekhar recently said that it may take some time to finalize the regulations.
Meta’s move marks the first time a social media company has taken proactive steps to flag AI-generated content, setting a precedent for the industry. It is not yet known whether other tech giants will follow suit.
However, experts believe that whether or not others implement similar policies, government regulation is needed. Content creators or other platforms may not follow suit, leaving a fragmented landscape of differing approaches. Governments can therefore create clear definitions, address the different types of deepfakes (face swapping, voice synthesis, body-movement manipulation, and text-based deepfakes), and outline the consequences of abuse.
Governments can establish regulatory bodies, or empower existing ones, to investigate and punish offenders. Furthermore, since deepfakes cross national borders, international cooperation can ensure uniform standards and facilitate cross-border investigation and prosecution.
Nilesh Tribhuvann, founder and managing director of White & Brief, Advocates & Solicitors, said Meta’s initiative was laudable, and that with recent incidents ranging from financial scams to celebrity exploitation, the measure was timely and necessary.
“[But] government oversight remains essential. Robust legislation and enforcement are necessary to ensure that all social media platforms comply with strict regulations. This proactive approach will not only strengthen user protection, but also promote accountability across the technology industry,” he said.
Arun Prabhu, Partner (Head of Technology & Telecom), Cyril Amarchand Mangaldas, said: “Leading platforms and service providers have developed responsible AI principles that enable labeling and transparency. However, it is common for government regulation and industry standards to work in concert to ensure consumer safety, especially in rapidly developing areas such as artificial intelligence.”