The Demand for Misinformation Outweighs the Supply
With the US presidential election just over a year away, candidates and voters are bracing for a surge of AI-generated misinformation. Adding to the concern, numerous research programs devoted to studying and combating misinformation are being shut down amid allegations of bias.
Based on all of this, I have a prediction: AI-generated misinformation won’t be a big problem in the 2024 campaign. But that’s only because so many other forms of misinformation are already so common.
In economic terms, the problem with misinformation is demand, not supply. Consider, for example, the claim that the 2020 election was stolen from former President Donald Trump. Put simply, there was a demand for this misinformation, chiefly from aggrieved Trump supporters, and there was a supply, above all from Trump himself. Supply met demand, the issue was salient and visceral, and the misinformation persists to this day.
No one needed an AI-generated fake video of officials doctoring ballots (and high-quality fakes weren't feasible at the time). Even simpler techniques, such as manipulated images, played little role in spreading the false narrative. The critical factor was that many Trump supporters wanted to believe their candidate had been wronged, and Trump supplied a narrative of victimization. No evidence, or even pseudo-evidence, was required; and the objective evidence against Trump's claims has not dented his support.
In other words, misinformation is, in many cases, fundamentally a low-tech product.
Or consider the story that former President Barack Obama was not born in the United States. It did not take off because someone forged a convincing Indonesian birth certificate. Rather, many people came to the issue wanting to believe that Obama was not a "real American"; some flimsy claims were thrown their way, and they ran with them. The release of Obama's US birth certificate did not convince them they were wrong.
Lies, misperceptions, self-deceptions: they have long been in oversupply. Blame China, Russia, social media, the mainstream media, whomever you like. The potentially gullible person is already inundated with more lies in a single day than he could ever evaluate.
Most lies simply don't matter that much, because the scarce resources lie on the demand side: attention and engagement. How much does someone want to believe they have been wronged? How much do they resent "the establishment"? What grudges do they hold, and against whom or what? And how readily can they coordinate with like-minded others, forming a kind of misinformation affinity group?
AI should not be expected to worsen these problems, at least not through any obvious first-order effect (of course, all major societal changes have varied consequences through many different channels). Large language models may even help, by allowing people to ask for relatively objective answers.(1)
It is also instructive to look at episodes of "misinformation" that may not have been misinformation at all. The Covid lab-leak hypothesis was initially suppressed on mainstream social media, yet it is now seriously discussed and may even be true. It survived in part because the supply side of misinformation was so plentiful. Many proponents of the hypothesis were honest truth-seekers, but many were mischievous troublemakers. In this case they served a useful function, much as short sellers do in markets, even if their motives were not pure.
What about possible solutions? Fact-checking is neither economically sustainable nor journalistically nimble enough. "Education" is often proposed as a cure, but it is frequently the most educated who invent, spread, and pursue conspiracy theories. The less educated tend to be more confused by propaganda than convinced by it.
The only long-term solution is transparent governance that solves some of today's critical problems and thereby raises social trust. After winning World War II, for example, the US government became more popular and more trusted for at least a couple of decades. Good governance today may be more contested, and its results may take a while to show, but it is probably the best option available. A better-functioning world, whether economically or politically, is likely a more trusting one.
There is, unfortunately, no simple way to combat misinformation. AI adds to the supply, but is unlikely to make the problem significantly worse; the demand side is decisive. Trust is hard to build, but societies that have it enjoy a substantial comparative advantage.
Elsewhere in Bloomberg Opinion:
- What If Artificial Intelligence Makes Us All Stupid?: Jessica Karl
- Regulation of AI Is Necessary, and Complicated: Noah Feldman
- Artificial Intelligence Could Make Democracy Even Messier: Tyler Cowen
(1) Today's LLMs are not entirely objective (they lean somewhat to the left), but on most substantive questions they do quite well.
This column does not necessarily reflect the opinion of the editorial board or of Bloomberg LP and its owners.
Tyler Cowen is a Bloomberg Opinion columnist, a professor of economics at George Mason University, and a writer of the blog Marginal Revolution.