Research Reveals Facebook’s Architecture Undermines Its Misinformation Policies
Researchers who analyzed Facebook’s misinformation policies have found that the social media giant’s core design hinders its attempts to tackle the rampant spread of false information on the platform.
Even as Facebook tweaked its algorithms and removed content and accounts to combat vaccine misinformation, the platform’s architecture pushed back, George Washington University researchers found.
Their study, published in the journal Science Advances, found no decrease in engagement with anti-vaccine content, despite Facebook’s significant efforts to remove much of that content during the COVID-19 pandemic.
The researchers attribute these results to what the platform was originally designed to do: enable community members to connect over shared interests, which include both pro- and anti-vaccine persuasions.
“[Facebook] is designed to allow motivated people to build communities and easily exchange information about any topic,” said David Broniatowski, lead author of the study and assistant professor of engineering management and systems engineering.
“Individuals who are highly motivated to find and share anti-vaccine content will only use the system in the way it was designed to be used, making it difficult to balance these behaviors with public health or other public safety concerns,” Broniatowski said.
In the anti-vaccine content that remained on the platform, links to off-platform, low-credibility sites and “alternative” social media platforms proliferated, the researchers said.
They also found that this remaining content grew more misinformative, containing sensational false claims about vaccine side effects that were often too new to be fact-checked in real time.
Additionally, anti-vaccine content producers used the platform more effectively than their pro-vaccine counterparts: although both groups had extensive page networks, the anti-vaccine side coordinated content delivery across pages, groups, and users’ news feeds.
The study also found “collateral damage” from the platform’s removal practices: some pro-vaccine content was taken down as well, and the general vaccine-related debate became more politically charged and polarized.
Broniatowski pointed out that the debate about social media platforms and AI governance revolves largely around either content or algorithms.
“To combat misinformation and other online harm effectively, we need to move beyond content and algorithms to design and architecture.
“Removing content or changing algorithms can be ineffective if it doesn’t change what the platform was designed to do. You have to change the architecture if you want to balance [anti-vaccine behavior against public health issues],” Broniatowski said.
Social media platform designers could develop evidence-based “building codes” for their platforms to reduce online harms and protect users, the researchers said.