AI disinformation has become a growing problem that needs to be addressed. (Pixabay)

Experts Say Human Error, Not AI, Is the Primary Cause of Disinformation

Efforts to tackle the rise of AI-generated disinformation online are being hindered by public skepticism toward institutions and a limited ability to identify fake images, videos, and audio clips, according to experts. Lawmakers, fact-checking organizations, and some tech companies are working together to counter the threat.

“Social media and people have made it so that even if we come in and check the facts and say, ‘No, this is fake,’ people say, ‘I don’t care what you say, this fits my worldview,’” said Hany Farid, an expert in deepfake analysis and a professor at the University of California, Berkeley.

“Why do we live in a world where reality seems so hard to grasp?” he said. “It’s because our politicians, our media and the internet have created distrust.”

Farid spoke on the first episode of the new season of the Bloomberg Originals series AI IRL.

Experts have been warning for years about the potential for artificial intelligence to accelerate the spread of disinformation. The pressure to act increased this year, however, with the introduction of a new set of powerful generative AI tools that make producing visuals and text cheap and easy. In the United States, there are fears that AI-generated disinformation may influence the 2024 presidential election. In Europe, meanwhile, new laws require the largest social media platforms to combat the spread of disinformation on their services.

So far, the scope and impact of AI-generated disinformation remain unclear, but there is cause for concern. Bloomberg reported last week that misleading AI-generated audio clips of politicians were circulating online days before a hotly contested election in Slovakia. Some politicians in the US and Germany have also shared AI-generated images.

Rumman Chowdhury, a fellow at the Berkman Klein Center for Internet & Society at Harvard University and a former executive at Twitter, now known as X, acknowledged that human fallibility is part of the challenge of countering disinformation.

“You can have bots, you can have malicious actors,” she said, “but actually a very large amount of information online that is fake is often shared by people who didn’t know any better.”

Chowdhury said internet users are generally more adept at spotting fake text because they have been exposed to suspicious emails and social media posts for years. But as artificial intelligence enables more realistic fake images, audio, and video, “people need this level of training” for those formats as well.

“If we see a video that looks real — for example, of a bomb hitting the Pentagon — most of us believe it,” she said. “If we saw a message and someone said, ‘Hey, a bomb just hit the Pentagon,’ we’re actually more likely to be skeptical, because we’ve been trained more on text than on videos and pictures.”
