Tech companies agree to work together to fight AI-generated election manipulation
On Friday, leading technology companies agreed to take proactive measures to prevent the misuse of artificial intelligence in influencing democratic elections globally.
Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to unveil a new voluntary framework for responding to AI-generated deepfakes that deliberately deceive voters. Twelve other companies – including Elon Musk’s X – have also signed on.
“Everyone recognizes that no technology company, no government, no civil society organization can deal with the advent of this technology and its potential malicious use alone,” Nick Clegg, president of global affairs at Meta, the parent company of Facebook and Instagram, said in an interview ahead of the summit.
The agreement is largely symbolic, but it targets increasingly realistic AI-generated images, audio and video “that fraudulently falsify or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in democratic elections, or that provide voters with false information about when, where and how they can vote legally.”
The companies are not committing to ban or remove deepfakes. Instead, the agreement outlines the methods they will use to try to detect and label deceptive AI content when it is created or shared on their platforms. It says the companies will share best practices with one another and provide “swift and proportionate responses” when such content begins to spread.
The vagueness of the commitments and the lack of binding requirements likely helped win over a range of businesses, but may disappoint pro-democracy activists and watchdogs seeking stronger guarantees.
“The language is not quite as strong as one might expect,” said Rachel Orey, senior associate director of the Bipartisan Policy Center’s Elections Project. “I think we should give credit where credit is due and recognize that corporations have a vested interest in not having their tools used to undermine free and fair elections. It’s voluntary, though, and we’ll be watching to see if they comply.”
Clegg said every company rightly has its own content policies.
“This is not trying to impose a straitjacket on everyone,” he said. “And in any event, no one in the industry believes that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole, finding everything you think might mislead someone.”
Several European and US political leaders joined the tech executives for Friday’s announcement. European Commission Vice-President Vera Jourova said that while such an agreement could not be comprehensive, “it contains very impressive and positive elements.” She also urged fellow politicians to take responsibility not to use artificial intelligence tools in misleading ways.
She emphasized the seriousness of the issue, saying that “the combination of artificial intelligence serving the purposes of disinformation and disinformation campaigns could be the end of democracy, not only in EU member states.”
The agreement, reached at the annual security meeting in the German city, comes as more than 50 countries are scheduled to hold national elections in 2024. Some have already done so, including Bangladesh, Taiwan, Pakistan and, most recently, Indonesia.
AI-driven election interference has already begun, such as when AI robocalls mimicking US President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.
Just days before Slovakia’s elections last fall, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but the recordings had already spread widely on social media as if they were real.
Politicians and campaign committees have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.
Friday’s agreement stated that platforms responding to AI-generated deepfakes “will pay attention to context, and in particular to safeguarding educational, documentary, artistic, satirical and political expression.”
It said the companies are focusing on being transparent with users about their policies on misleading AI content and working to educate the public on how to avoid falling for AI fakes.
Many of the companies have previously said they are putting safeguards on their own generative AI tools that can manipulate images and audio, while also working to identify and label AI-generated content so social media users know whether what they’re seeing is real. But most of those proposed solutions have yet to be rolled out, and the companies have faced pressure from regulators and others to do more.
That pressure has increased in the United States, where Congress has yet to pass laws regulating AI in politics, leaving AI companies largely to fend for themselves. In the absence of federal legislation, many states are considering ways to put safeguards on the use of artificial intelligence in elections and other applications.
The Federal Communications Commission recently confirmed that AI-generated voice clones in robocalls are against the law, but that ruling doesn’t cover audio deepfakes when they circulate on social media or in campaign ads.
Disinformation experts warn that while AI deepfakes are particularly worrisome because they could fly under the radar and sway voters this year, cheaper and simpler forms of disinformation remain a major threat. The agreement acknowledges this as well, noting that “traditional manipulations (‘cheap fakes’) can be used for similar purposes.”
Many social media companies already have policies in place to deter deceptive posts about electoral processes, whether generated by artificial intelligence or not. For example, Meta says it will remove false information about “voting dates, places, times and methods, voter registration or census participation,” as well as other false posts intended to disrupt someone’s civic engagement.
Jeff Allen, co-founder of the Integrity Institute and a former data scientist at Facebook, said the deal seems like a “positive step,” but he would still like to see social media companies take other basic measures to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.
Lisa Gilbert, vice president of the advocacy group Public Citizen, argued Friday that the deal is “not enough” and that AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are significant and adequate safeguards in place to help us avoid many of the potential problems.”
In addition to the major platforms behind Friday’s deal, other signatories include chatbot developers Anthropic and Inflection AI; voice-cloning startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and Trend Micro; and Stability AI, known for its Stable Diffusion image generator.
Notably missing from the deal is another popular AI image generator, Midjourney. The San Francisco-based startup did not immediately respond to a request for comment on Friday.
The inclusion of X, which was not mentioned in an earlier announcement about the pending deal, was one of the biggest surprises of Friday’s agreement. Musk sharply cut back content moderation teams after taking over the former Twitter and has described himself as a “free speech absolutist.”
But in a statement Friday, X CEO Linda Yaccarino said “every citizen and business has a responsibility to secure free and fair elections.”
“X is dedicated to doing its part and working with others to combat AI threats while protecting free speech and maximizing transparency,” she said.