Experts caution about the potential consequences for elections as social media safeguards weaken and AI-generated deepfakes become increasingly prevalent.
False election conspiracy theories that fueled the violent attack on the U.S. Capitol nearly three years ago still persist on social media and cable news: claims of suitcases filled with ballots, late-night ballot dumps, and dead people casting votes.
Experts warn it is likely to be worse in the upcoming presidential election. The safeguards that tried to counter those false claims last time are eroding, while the tools and systems that create and spread them are only getting stronger.
Many Americans, egged on by former President Donald Trump, have continued to embrace the unsupported idea that elections across the United States cannot be trusted. A majority of Republicans (57%) believe Democrat Joe Biden was not legitimately elected president.
At the same time, generative AI tools have made it much cheaper and easier to spread the kind of misinformation that can mislead voters and potentially influence elections. And social media companies that once invested heavily in setting the record straight have shifted their priorities.
“I expect a tsunami of misinformation,” said Oren Etzioni, an artificial intelligence expert and professor emeritus at the University of Washington. “I can’t prove it. I hope to be proven wrong. But the ingredients are there, and I’m absolutely terrified.”
AI DEEPFAKES GO MAINSTREAM
Manipulated images and videos of elections are nothing new, but 2024 will be the first U.S. presidential election in which advanced AI tools that can produce convincing fakes in seconds are just a few clicks away.
Made-up images, videos and audio clips, known as deepfakes, have already begun making the rounds in experimental presidential campaign ads. More insidious versions could easily spread unlabeled on social media and fool people days before an election, Etzioni said.
“You could see a political candidate like President Biden being rushed to a hospital,” he said. “You could see a candidate saying things he never said. You could see a run on the banks. You could see bombings and violence that never happened.”
AI deepfakes have already affected elections around the world, said Larry Norden, senior director of the elections and government program at the Brennan Center for Justice. Just days before Slovakia’s recent election, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers rushed to identify them as false, but they were shared as real across social media regardless.
These tools can also be used to target specific communities and sharpen misleading messages about voting. Experts said they could take the form of convincing text messages, false announcements about voting processes shared in different languages on WhatsApp, or bogus websites mocked up to look like official government sites in your area.
When faced with content made to look and sound real, “everything we’ve been conditioned to do through evolution comes into play to make us believe in the fake rather than the actual reality,” said misinformation researcher Kathleen Hall Jamieson, director of the University of Pennsylvania’s Annenberg Public Policy Center.
Republicans and Democrats in Congress and at the Federal Election Commission are exploring measures to regulate the technology, but they have not finalized any rules or legislation. That leaves states to enact the only restrictions, for now, on political AI deepfakes.
A few states have passed laws requiring deepfakes to be labeled or banning those that misrepresent candidates. Some social media companies, including YouTube and Meta, which owns Facebook and Instagram, have adopted AI labeling policies. It remains to be seen whether they can consistently catch violators.
SOCIAL MEDIA SAFEGUARDS ERODE
A little over a year ago, Elon Musk bought Twitter and began firing its executives, dismantling some of its core features, and reshaping the social media platform into the X it is today.
He has since changed its verification system, leaving public officials vulnerable to impersonators. He has removed the teams that fought misinformation on the platform, leaving the user community to police itself. And he has reinstated the accounts of previously banned conspiracy theorists and extremists.
Many conservatives have applauded the changes, saying Twitter’s earlier attempts at moderation amounted to censorship of their views. But democracy advocates argue that the takeover has turned a previously flawed but useful resource for news and election information into a largely unregulated echo chamber that amplifies hate speech and misinformation.
Twitter used to be one of the most “accountable” platforms, showing a willingness to test features that might reduce misinformation even at the expense of engagement, said Jesse Lehrich, founder of Accountable Tech, a nonprofit watchdog group.
“Obviously now they’re exactly at the other end of the spectrum,” he said, adding that he believes the company’s changes have given other platforms a shield to relax their own practices. X did not respond to emailed questions from The Associated Press, sending only an automated response.
Heading into 2024, X, Meta and YouTube have together removed 17 policies that protected against hate and misinformation, according to Free Press, a nonprofit that advocates for civil rights in technology and media.
In June, YouTube announced that while it would continue to regulate content that misleads voters about current or upcoming elections, it would stop removing content that falsely claims the 2020 election or other previous U.S. elections were marred by “widespread fraud, errors or glitches.” The platform said the policy was an attempt to protect the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions.”
Lehrich said that even if tech companies want to steer clear of removing misleading content, “there are many content-neutral ways” platforms can reduce the spread of disinformation, from flagging months-old articles to making it harder to share content without reviewing it first.
X, Meta and YouTube have also laid off thousands of employees and contractors since 2020, including some who worked as content moderators.
The shrinking of such groups, which many blame on political pressure, “sets the stage for things to be worse in 2024 than in 2020,” said disinformation expert Kate Starbird of the University of Washington.
Meta explains on its website that it has some 40,000 people devoted to safety and security and maintains “the largest independent fact-checking network of any platform.” It also regularly takes down networks of fake social media accounts that aim to sow discord and distrust.
“No tech company does or invests more in securing elections online than Meta — not just during election seasons, but at all times,” the company said in a statement.
YouTube spokesperson Ivy Choi said the platform is “heavily invested” in connecting people to high-quality content on YouTube, including for elections. She pointed to the platform’s recommendation and information panels, which provide users with reliable election news, and said the platform removes content that misleads voters about how to vote or encourages interference in the democratic process.
The rise of TikTok and other, less regulated platforms such as Telegram, Truth Social and Gab has also created more information silos online where baseless claims can spread. Some apps that are particularly popular among communities of color and immigrants, such as WhatsApp and WeChat, rely on private chats, making it hard for outside groups to see any misinformation that may be spreading.
“I worry that in 2024 we will see the same recycled, entrenched false narratives, but with more sophisticated tactics,” said Roberta Braga, founder and director of the Digital Democracy Institute of the Americas. “But on the positive side, I am hopeful there is more social resilience to those things.”
THE TRUMP FACTOR
Trump’s front-runner status in the Republican presidential primary is on the minds of disinformation researchers, who fear it will exacerbate election misinformation and potentially lead to election vigilantism or violence.
The former president continues to falsely claim to have won the 2020 election.
“Donald Trump has clearly embraced and fueled false claims of election fraud in the past,” Starbird said. “We can expect that he may continue to use it to motivate his base.”
Without evidence, Trump has already primed his supporters to expect fraud in the 2024 election, urging them to step in to “guard the vote” and prevent vote rigging in heavily Democratic cities. Trump has long suggested elections will be rigged if he does not win, as he did ahead of both the 2016 and 2020 elections.
This continued erosion of voter confidence in democracy can lead to violence, said Bret Schafer of the nonpartisan Alliance for Securing Democracy, which tracks disinformation.
“If people end up not trusting the information about elections, then democracy just stops working,” he said. “If the misinformation or disinformation campaign is effective enough that a large enough number of Americans don’t believe the results reflect what actually happened, January 6 is likely to look like a warm-up event.”
ELECTION OFFICIALS RESPOND
Election officials have spent the years since 2020 preparing for an expected surge in election denial stories. They have sent teams to explain voting processes, hired outside groups to monitor misinformation as it emerges, and stepped up physical security at counting centers.
In Colorado, Secretary of State Jena Griswold said paid social media and TV campaigns that humanize election workers have helped inoculate voters against misinformation.
“It’s an uphill battle, but we have to be proactive,” she said. “Misinformation is one of the greatest threats to American democracy we see today.”
Minnesota Secretary of State Steve Simon’s office is leading #TrustedInfo2024, a new online public education initiative by the National Association of Secretaries of State to promote election officials as a trusted source of election information in 2024.
His office also holds meetings with county and city election officials and updates a “Fact and Fiction” information page on its website as false claims emerge. A new Minnesota law protects election workers from threats and harassment, bars people from knowingly spreading false information ahead of an election, and criminalizes sharing deepfake images without consent to harm a political candidate or influence an election.
“We’re hoping for the best, but planning for the worst through these layers of protection,” Simon said.
In rural Wisconsin, north of Green Bay, Oconto County Clerk Kim Pytleski has been traveling the area giving talks and presentations to small groups about voting and elections to build voter confidence. The county also organizes public equipment tests so that residents can follow the process.
“Being able to speak directly with election officials makes all the difference,” she said. “Being able to see that there are real people behind these processes who are committed to their work and want to do a good job helps people understand that we are here to serve them.”