Survey Indicates Majority of US Adults Believe Artificial Intelligence Will Contribute to Misinformation in Upcoming Presidential Election

As 2024 draws near, the alarms have become increasingly urgent: The swift progress of artificial intelligence tools poses a significant risk of magnifying misinformation during the upcoming presidential election to an unprecedented extent.

The majority of US adults feel the same way, according to a new poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.

The poll found that nearly six in 10 adults (58%) believe AI tools – which can micro-target political audiences, mass-produce persuasive messages, and generate realistic fake images and videos in seconds – will increase the spread of false and misleading information in next year’s election.

By comparison, only 6% think AI will decrease the spread of misinformation, while one-third say it won’t make much of a difference.

“Look what happened in 2020 — and that was just social media,” said Rosa Rangel, 66, of Fort Worth, Texas.

Rangel, a Democrat who said she saw a lot of “falsehoods” spread on social media in 2020, said she believes AI will make things even worse in 2024 — like a pot “brewing over.”

Only 30% of American adults have used AI chatbots or image generators, and fewer than half (46%) have heard or read at least a little about AI tools. Still, there is broad agreement that candidates should not be using AI.

When asked whether it would be good or bad for 2024 presidential candidates to use AI in certain ways, clear majorities said it would be bad to create false or misleading media for political ads (83%), to edit or touch up photos or videos for political ads (66%), to tailor political ads to individual voters (62%), and to answer voter questions via a chatbot (56%).

These sentiments are supported by majorities of Republicans and Democrats, who agree that it would be wrong for presidential candidates to create false photos or videos (85% of Republicans and 90% of Democrats) or answer questions from voters (56% of Republicans and 63% of Democrats).

The bipartisan pessimism toward candidates using AI comes after the technology has already been deployed in the Republican presidential primary.

In April, the Republican National Committee released an ad created entirely by artificial intelligence that purported to show the future of the country if President Joe Biden were re-elected. It used fake but realistic-looking photographs showing boarded-up storefronts, armored military patrols in the streets and waves of migrants sparking panic. The ad disclosed in fine print that it was created by artificial intelligence.

Ron DeSantis, the Republican governor of Florida, also used AI in his campaign for the GOP nomination. He promoted an ad that used AI-generated images to make it look like former President Donald Trump was hugging Dr. Anthony Fauci, an infectious disease expert who oversaw the country’s response to the COVID-19 pandemic.

Never Back Down, a super PAC supporting DeSantis, used an AI voice-cloning tool to imitate Trump’s voice, making it seem as though he were narrating a social media post he had written.

“I think they should be campaigning on merit, not on their ability to strike fear into the hearts of voters,” said Andie Near, 42, of Holland, Michigan, who typically votes Democratic.

She has used AI tools to retouch images in her work at a museum, but she said she believed politicians using the technology to mislead would “deepen and exacerbate the impact that even traditional attack ads can have.”

College student Thomas Besgen, a Republican, also disagrees with campaigns that use fake voices or images to make it seem like a candidate said something they never said.

“Morally, it’s wrong,” the 21-year-old from Connecticut said.

Besgen, a mechanical engineering major at the University of Dayton in Ohio, said he supports banning deepfake ads or, if that’s not possible, requiring them to be labeled as AI-generated.

The Federal Election Commission is currently considering a petition to regulate AI-generated deepfakes in political ads ahead of the 2024 election.

Although Besgen is skeptical about the use of artificial intelligence in politics, he said he is excited about its potential for the economy and society. He actively uses AI tools like ChatGPT to help explain historical topics that interest him or brainstorm ideas. He also uses image generators for fun – for example, to imagine what sports stadiums will look like in 100 years.

He said he typically trusts the information he gets from ChatGPT and is likely to use it to learn more about presidential candidates, something only 5% of adults say they are likely to do.

The poll found that Americans are more likely to turn to the news media (46%), friends and family (29%) and social media (25%) than to AI chatbots for information about the presidential election.

“Whatever answer it gives me, I would take it with a grain of salt,” Besgen said.

Most Americans are similarly skeptical of the information AI chatbots spit out. Just 5% say they are extremely or very confident that the information is factual, while 33% are somewhat confident, according to the survey. Most adults (61%) say they are not very or not at all confident that the information is reliable.

This echoes warnings from many AI experts against relying on chatbots to retrieve information. The large AI language models that power chatbots work by repeatedly choosing the most plausible next word in a sentence, which makes them good at mimicking writing styles but also prone to making things up.
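That next-word mechanism can be sketched in a few lines of Python. This is a toy illustration only; the vocabulary and probabilities below are invented for demonstration and bear no relation to any real model:

    # Toy greedy next-word generation (illustration only).
    # The "probabilities" here are made up; a real large language
    # model learns billions of such weights from training text.
    NEXT_WORD_PROBS = {
        "the":      {"election": 0.40, "candidate": 0.35, "moon": 0.25},
        "election": {"results": 0.50, "officials": 0.30, "was": 0.20},
        "results":  {"showed": 0.60, "were": 0.40},
    }

    def generate(start, max_words=4):
        """Repeatedly pick the most plausible next word (greedy decoding)."""
        words = [start]
        for _ in range(max_words):
            choices = NEXT_WORD_PROBS.get(words[-1])
            if not choices:
                break  # no known continuation for this word
            words.append(max(choices, key=choices.get))
        return " ".join(words)

    print(generate("the"))  # -> "the election results showed"

Nothing in that loop checks whether the output is true; it only chases plausibility, which is exactly why such systems can fabricate confident-sounding falsehoods.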

Adults affiliated with both major political parties are generally open to AI regulations. They responded more positively than negatively to various ways to ban or flag AI-generated content that could be mandated by tech companies, the federal government, social media companies or the news media.

About two-thirds favor the government banning AI-generated content that contains false or misleading images in political ads, while a similar share want tech companies to label all AI-generated content made on their platforms.

Biden rolled out some federal guidelines for artificial intelligence on Monday when he signed an executive order to guide the development of the rapidly advancing technology. The order requires the industry to develop safety and security standards and directs the Department of Commerce to issue guidance on labeling and watermarking AI-generated content.

Americans largely see preventing AI-generated false or misleading information during the 2024 presidential election as a shared responsibility. About six in 10 (63%) say much of the responsibility lies with the tech companies that create AI tools, but about half also place a good deal of it on the news media (53%), social media companies (52%) and the federal government (49%).

Democrats are somewhat more likely than Republicans to say social media companies have a lot of responsibility, but generally agree on the level of responsibility held by tech companies, the news media and the federal government.

The survey of 1,017 adults was conducted October 19-23, 2023, from a sample drawn from NORC’s probability-based AmeriSpeak panel, which was designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 4.1 percentage points.
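As a rough sanity check on that figure (our own back-of-the-envelope arithmetic, not AP-NORC’s published methodology), the textbook worst-case margin for a simple random sample of 1,017 works out to about plus or minus 3.1 points; the larger reported figure is consistent with an additional design-effect adjustment for the panel’s weighting:

    import math

    n = 1017                 # survey respondents
    z = 1.96                 # multiplier for 95% confidence
    p = 0.5                  # worst-case proportion
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"{moe:.1%}")      # ~3.1% for a simple random sample
    # The reported +/-4.1 points exceeds this, which is what one would
    # expect once a design effect for the panel's weighting is applied.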

O’Brien reports from Providence, Rhode Island. Associated Press writer Linley Sanders in Washington, D.C., contributed to this report.

The Associated Press receives support from several private foundations to improve its coverage of elections and democracy. See more about AP’s democracy initiative here. AP is solely responsible for all content.
