The issue of fake news is multifaceted and can encompass various forms such as written content, visuals, and videos.
There are several ways to produce fake news, particularly in written form. A fake news article can be produced by selectively editing facts such as people’s names, dates or statistics, or it can be fabricated entirely, describing events that never happened or people who do not exist.
Fake news articles can also be produced by machine, as advances in artificial intelligence make it particularly easy to produce false information.
Questions like “Was there widespread voter fraud during the 2020 US election?” or “Is climate change a hoax?” can actually be verified by analyzing the available data. Such questions have right and wrong answers, and content that asserts the wrong ones is false information.
Misinformation and disinformation – or fake news – can have damaging effects on large numbers of people in a short period of time. Although the concept of fake news existed long before modern technology, social media has exacerbated the problem.
A 2018 study of Twitter found that fake news was retweeted more often by humans than by bots, and that false stories were 70 percent more likely to be retweeted than true ones. The same study found that true stories took about six times longer than false ones to reach 1,500 people, and that while true stories rarely reached more than 1,000 people, the most popular fake news stories could reach as many as 100,000.
The 2020 US presidential election, COVID-19 vaccines and climate change have all been the targets of disinformation campaigns with serious consequences. Misinformation about COVID-19 alone is estimated to cost between $50 million and $300 million every day. The price of political misinformation can be civil unrest, violence or even the erosion of public trust in democratic institutions.
Detecting false information
Detecting false information can be done with a combination of algorithms, machine learning models and humans. An important question is who is responsible for curbing – if not stopping – the spread of misinformation once it is detected. Only social media companies can truly control the flow of information through their networks.
A particularly simple but effective way to generate misinformation is to selectively edit real news articles. Consider, for example, the headline “Ukrainian director and playwright arrested and accused of ‘justifying terrorism.’” It was produced by replacing “Russian” with “Ukrainian” in the headline of an actual news article.
Detecting misinformation online requires a multifaceted approach to curb its growth and spread.
Social media communication can be modeled as a network in which users are the points and communications are the links between them; a retweet or equivalent share of a post creates a connection between two points. In such network models, misinformation disseminators tend to form much more tightly interconnected core-periphery structures than users who spread true information.
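This network model can be sketched in a few lines of Python. The user names and retweet pairs below are invented for illustration; the point is that a coordinated group of accounts retweeting one another produces a visibly denser subgraph than ordinary users do.

```python
# Sketch of the network model: users are points, retweets are links.
# All user names and retweet pairs here are hypothetical.
from collections import defaultdict

def build_graph(retweets):
    """Build an undirected adjacency map from (user, user) retweet pairs."""
    adj = defaultdict(set)
    for u, v in retweets:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def density(adj, nodes):
    """Edges-per-node ratio of the subgraph induced by `nodes`.
    A high ratio signals a tightly interconnected core."""
    nodes = set(nodes)
    edges = sum(1 for u in nodes for v in adj[u] if v in nodes and u < v)
    return edges / len(nodes)

# Hypothetical campaign accounts a-d retweet each other densely,
# while ordinary users x-z connect only sparsely.
retweets = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"),
            ("c", "d"), ("x", "a"), ("y", "b"), ("z", "x")]
adj = build_graph(retweets)
print(density(adj, ["a", "b", "c", "d"]))  # dense core: 6 edges / 4 nodes = 1.5
print(density(adj, ["x", "y", "z"]))       # sparse periphery
```

Measuring density like this on real retweet data is far more involved, but the contrast between core and periphery is the signal the detection algorithms exploit.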
My research group has developed efficient algorithms for detecting dense structures in communication networks. This information can be further analyzed to detect disinformation campaigns.
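The group’s exact algorithms are not reproduced here, but a classic baseline for finding dense structures is greedy “peeling”: repeatedly remove the lowest-degree user and keep the node set with the best density seen along the way. A minimal sketch, with an invented toy graph:

```python
# Greedy "peeling" for densest-subgraph detection -- a standard textbook
# approximation, not the research group's own algorithm. The graph below
# is a hypothetical example: a fully connected core a-d plus one outlier x.
def densest_subgraph(adj):
    """adj: {node: set(neighbors)}. Returns the node set with the highest
    edges-per-node density found while peeling off min-degree nodes."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    nodes = set(adj)
    edges = sum(len(vs) for vs in adj.values()) // 2
    best, best_density = set(nodes), edges / len(nodes)
    while len(nodes) > 1:
        u = min(nodes, key=lambda n: len(adj[n]))  # peel min-degree node
        edges -= len(adj[u])
        for v in adj[u]:
            adj[v].discard(u)
        nodes.discard(u)
        d = edges / len(nodes)
        if d > best_density:
            best, best_density = set(nodes), d
    return best

retweet_graph = {"a": {"b", "c", "d", "x"}, "b": {"a", "c", "d"},
                 "c": {"a", "b", "d"}, "d": {"a", "b", "c"}, "x": {"a"}}
print(densest_subgraph(retweet_graph))  # the tight core {'a','b','c','d'}
```

Peeling is attractive at social-media scale because it runs in time roughly linear in the number of connections.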
Since these algorithms rely solely on the structure of the communication network, content analysis by algorithms and humans is still required to confirm that the information is actually false.
Detecting manipulated articles requires careful analysis. Our study used a neural network-based approach that combines textual information with an external knowledge base to detect such tampering.
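The neural model itself is beyond a short sketch, but the knowledge-base idea can be illustrated simply: extract a claim from an article and compare it against stored facts. The fact table and claim below are toy stand-ins, hypothetical rather than the study’s actual data or method.

```python
# Toy illustration of checking a claim against an external knowledge base.
# The fact table and event description are hypothetical examples.
FACTS = {
    ("director arrested for 'justifying terrorism'", "nationality"): "Russian",
}

def check_claim(event, attribute, claimed_value):
    """Flag a claim whose attribute contradicts the knowledge base."""
    known = FACTS.get((event, attribute))
    if known is None:
        return "unverifiable"
    return "consistent" if claimed_value == known else "contradicted"

# The entity-swap example from earlier: "Russian" replaced with "Ukrainian".
print(check_claim("director arrested for 'justifying terrorism'",
                  "nationality", "Ukrainian"))  # contradicted
```

In practice the hard parts are extracting claims reliably from free text and covering enough facts, which is where the neural component comes in.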
Stopping the spread
Spotting misinformation is only half the battle – decisive action is needed to stop it from spreading. Strategies to combat the spread of false information in social networks include both the intervention of Internet platforms and the launch of counter-campaigns to neutralize fake news campaigns.
Interventions can be hard, such as suspending a user’s account, or soft, such as marking a post as suspicious.
AI-powered detection algorithms are not 100% reliable, and there are costs both to wrongly flagging a genuine post and to missing a fake one.
For this purpose, we designed an intelligent intervention policy that automatically decides whether to intervene based on a post’s predicted truthfulness and popularity.
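The actual policy is more sophisticated, but its spirit can be sketched as an expected-cost rule: intervene only when the expected harm of leaving a post up outweighs the expected cost of wrongly flagging a genuine one. All thresholds and costs below are invented for illustration.

```python
# A toy expected-cost intervention rule -- a sketch in the spirit of the
# policy described above, not the actual policy. Costs are hypothetical.
def should_intervene(p_fake, predicted_reach,
                     harm_per_view=1.0, false_flag_cost=500.0):
    """Intervene when expected harm of spreading exceeds the expected
    cost of mistakenly flagging a genuine post."""
    expected_harm = p_fake * predicted_reach * harm_per_view
    expected_flag_cost = (1 - p_fake) * false_flag_cost
    return expected_harm > expected_flag_cost

print(should_intervene(p_fake=0.9, predicted_reach=10_000))  # True
print(should_intervene(p_fake=0.2, predicted_reach=50))      # False
```

Note how popularity matters: a likely-fake post with tiny predicted reach may not be worth the risk of a false flag, while a viral one is.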
Fighting fake news
Launching counter-campaigns to minimize or neutralize the effects of disinformation campaigns must take into account the vast differences between truth and fake news in terms of how quickly and widely each spreads.
In addition to these differences, reactions to stories can vary by user, topic and length of post. Our approach accounts for all of these factors to design a counter-campaign strategy that effectively reduces the spread of misinformation.
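The effect of a counter-campaign can be illustrated with a toy spread simulation, not the actual model: a fake story spreads through rounds of resharing, and users reached by the counter-message first stop resharing it. Every rate and count below is hypothetical.

```python
# Toy simulation of a counter-campaign's effect on spread. All rates,
# population size and branching numbers are invented for illustration.
import random

def simulate(rounds, reshare_rate, counter_rate, seed=0):
    """Return how many of 10,000 users the fake story reaches."""
    rng = random.Random(seed)
    reached, immune, frontier = 0, 0, 100   # 100 initial exposures
    for _ in range(rounds):
        new = 0
        for _ in range(frontier):
            if rng.random() < counter_rate:  # counter-message arrived first
                immune += 1
            else:
                reached += 1
                if rng.random() < reshare_rate:
                    new += 3                 # each reshare exposes 3 users
        frontier = min(new, 10_000 - reached - immune)
        if frontier <= 0:
            break
    return reached

print(simulate(rounds=10, reshare_rate=0.5, counter_rate=0.0, seed=1))
print(simulate(rounds=10, reshare_rate=0.5, counter_rate=0.4, seed=1))
# The second run reaches far fewer users: the counter-message pushes the
# story's effective branching factor below 1, so the cascade dies out.
```

Even this crude model shows why timing matters so much: the counter-campaign works by getting to users before the faster-spreading fake story does.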
Recent advances in generative AI, especially those based on large language models such as ChatGPT, make it easier than ever to generate articles at high speed and high volume, adding to the challenge of detecting misinformation and combating its spread at scale and in real time. Our current research continues to address this ongoing challenge with enormous societal impact.