Meta’s Oversight Board Struggles to Define Clear Rules for Deepfakes
NEW YORK: Meta’s oversight board has ruled that a Facebook video falsely claiming US President Joe Biden is a pedophile does not violate the company’s current rules, although it considers those rules “inconsistent” and too narrowly focused on content generated by artificial intelligence.
The Meta-funded but independent board took up the Biden video case in October in response to a user complaint about an altered seven-second video of the president posted on Meta’s flagship social network.
Its ruling on Monday is the first to address Meta’s “manipulated media” policy, which bans certain types of doctored videos, amid growing concerns about the potential use of new artificial intelligence technologies to influence this year’s elections.
The policy “lacks persuasive justification, is inconsistent and confusing for users, and does not clearly specify the harms it seeks to prevent,” the board said.
The board suggested that Meta update the rule to cover both audio and video content, regardless of whether artificial intelligence was used, and to apply labels identifying such content as manipulated.
It stopped short of recommending that the policy apply to photos, warning that this could make it too difficult to enforce at Meta’s scale.
Meta, which also owns Instagram and WhatsApp, told the board during the review that it planned to update the policy “to respond to the development of new and increasingly realistic artificial intelligence,” according to the ruling.
The company said in a statement Monday that it would review the decision and respond publicly within 60 days.
The Facebook clip manipulated real footage of Biden exchanging “I Voted” stickers with his granddaughter during the 2022 US midterm elections and kissing her on the cheek.
Versions of the same altered clip had already begun circulating in January 2023, the board said.
In its ruling, the oversight board said Meta was right to leave the video up under its current policy, which bans misleadingly altered videos only if they are generated by artificial intelligence or if they make people appear to say words they never said.
The board said that non-AI-edited content “is common and not necessarily less misleading” than content generated by AI tools.
It said the policy should also apply to audio-only content and to videos showing people doing things they never actually did.
It added that enforcement should consist of attaching labels to such content, rather than Meta’s current approach of removing posts from its platforms.