Lawmakers question social media platforms about their actions on AI-generated political deepfakes (Pexels)

Lawmakers Investigate Meta and X for Failing to Establish Regulations Against AI-Generated Political Deepfakes

Artificial intelligence-generated deepfakes are currently gaining significant attention for their ability to create uncanny scenarios involving celebrities. Examples include Tom Hanks promoting a dental plan, Pope Francis donning a fashionable puffer jacket, and U.S. Sen. Rand Paul casually sitting on the Capitol steps in a red bathrobe.

But what will happen next year before the US presidential election?

Google was the first major tech company to say it would add new labels to deceptive political ads created with artificial intelligence that could falsify a candidate’s voice or actions. Now, some US lawmakers are calling on the social media platforms X, Facebook and Instagram to explain why they won’t do the same.

Two Democratic members of Congress sent a letter Thursday to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino expressing “serious concern” about AI-generated political ads appearing on their platforms and asking each to explain any rules they have put in place to curb the harm such ads could do to free and fair elections.

“They are the two biggest platforms, and voters deserve to know what guardrails are being built,” Sen. Amy Klobuchar of Minnesota said in an interview with The Associated Press. “We just ask them, ‘Can’t you do this? Why don’t you do this? It’s clearly technically possible.’”

The letter to the executives from Klobuchar and U.S. Rep. Yvette Clarke of New York warned: “With the 2024 election fast approaching, a lack of transparency about this type of content in political ads could lead to a dangerous flood of election-related misinformation and disinformation across your various platforms — where voters often turn to learn about candidates and issues.”

X, formerly Twitter, and Meta, the parent company of Facebook and Instagram, did not immediately respond to requests for comment Thursday. Clarke and Klobuchar asked the leaders to respond to their questions by October 27.

The pressure on the social media companies comes as both lawmakers are helping lead the charge on regulating AI-generated political ads. A bill introduced by Clarke earlier this year would amend federal election law to require election ads to carry disclaimers when they contain images or video generated by artificial intelligence.

“I think people have a First Amendment right to put whatever content they want on social media platforms,” Clarke said in an interview Thursday. “I’m just saying you need to make sure you put a disclaimer on it and make sure the American people are aware that it’s fake.”

For Klobuchar, who is sponsoring companion legislation in the Senate that she aims to pass before the end of the year, “that’s like the bare minimum” of what’s needed. Meanwhile, both lawmakers said they hope the big platforms will take the lead on their own, especially given the disarray that has left the House without an elected speaker.

Google has already said that starting in mid-November, it will require a clear disclaimer on any AI-generated election ads that alter people or events on YouTube and other Google products. Google’s policy applies both in the United States and in the other countries where the company approves election ads. Meta does not have a specific rule for AI-generated political ads, but it does have a policy restricting “faked, manipulated or transformed” audio and images used for misinformation.

A more recent bipartisan Senate bill, backed by Klobuchar, Republican Sen. Josh Hawley of Missouri and others, would go further in banning “materially deceptive” deepfakes about federal candidates, except for parody and satire.

AI-generated ads are already part of the 2024 election, including one run by the Republican National Committee in April that aimed to show the future of the United States if President Joe Biden is re-elected. It used fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets and waves of panicked migrants.

Klobuchar said such an ad would likely be banned under the rules of the Senate bill. So would a fake image of Donald Trump hugging infectious disease expert Dr. Anthony Fauci, which was featured in an attack ad from Trump’s GOP primary opponent, Florida Gov. Ron DeSantis.

As another example, Klobuchar cited a deepfake video from earlier this year that purported to show Democratic Sen. Elizabeth Warren in a TV interview proposing restrictions on Republicans voting.

“It’s going to be so misleading if, in a presidential race, you see either the candidate you like or the candidate you don’t saying things that aren’t true,” said Klobuchar, who ran for president in 2020. “How are you ever going to know the difference?”

Klobuchar, who chairs the Senate Rules and Administration Committee, presided over a Sept. 27 hearing on artificial intelligence and the future of elections that brought in witnesses including Minnesota’s secretary of state, a civil rights advocate and some skeptics. Republicans and some of the witnesses they invited to testify have been wary of new rules, seeing them as infringing on free speech protections.

Ari Cohn, a lawyer at the think tank TechFreedom, told senators that the deepfakes that have appeared so far in the run-up to the 2024 election have drawn “tremendous scrutiny, even derision” and have not played a major role in misleading voters or influencing their behavior. He questioned whether new rules were needed.

“Even false speech is protected by the First Amendment,” Cohn said. “Defining truth and falsehood in politics is really the prerogative of voters.”

Some Democrats are also reluctant to support an outright ban on political deepfakes. “I don’t know that it would work, especially when it comes to First Amendment rights and potential lawsuits,” said Clarke, who represents parts of Brooklyn in Congress.

But if passed, her bill would authorize the Federal Election Commission to require disclaimers on AI-generated election ads, similar to what Google already does on its own.

In August, the FEC took a procedural step toward potentially regulating AI-generated deepfakes in political ads, opening to public comment a petition asking it to develop rules on misleading images, videos and audio clips.

The public comment period for the petition presented by the advocacy organization Public Citizen ends on October 16.
