Meta rolls back its COVID-19 misinformation rules in numerous countries
Meta is rolling back its COVID-19 misinformation rules on Facebook and Instagram in countries where the pandemic is no longer considered a national emergency. The rules will no longer be enforced in the United States and a number of other regions.
Last July, Meta asked the Oversight Board for an advisory opinion on how it should handle COVID-19 misinformation after stating that the pandemic had “evolved”. It took some time for the Oversight Board to weigh in, but in April the group recommended that Meta continue to remove false claims about COVID-19 that are “likely to directly contribute to the risk of imminent and significant physical harm.” The board also urged the company to “reevaluate” the types of pandemic-related claims it removes under the policy.
Additionally, the board suggested that Meta prepare for the World Health Organization ending its COVID-19 emergency “to protect freedom of expression and other human rights in these new circumstances.” The WHO lifted its COVID-19 emergency designation in May, and Meta has now responded to the Oversight Board’s recommendations.
“We are taking a more tailored approach to our COVID-19 misinformation policies, consistent with government guidance and our existing practices. In countries that still have a COVID-19 public health emergency declaration, we will continue to remove content that violates our COVID-19 misinformation policies, given the risk of imminent physical harm,” Meta wrote in an updated blog post. “We are consulting with health experts to understand which claims and categories of misinformation could continue to pose this risk. Our COVID-19 misinformation rules will no longer be in effect globally, as the global public health emergency declaration that triggered those rules has been lifted.”
Soon after the pandemic began, social media platforms came under pressure to counter users spreading COVID-19 misinformation, such as inaccurate claims about vaccines. Many platforms, including Meta, Twitter and YouTube, established policies to combat COVID-19 falsehoods.
These rules have evolved over time. In May 2021, for example, Meta said it would no longer remove claims that COVID-19 was “man-made”. As the Oversight Board noted last year, Meta removed 27 million Facebook and Instagram posts containing COVID-19 misinformation between March 2020 and July 2022.
Twitter stopped enforcing its COVID-19 misinformation policy in November, shortly after Elon Musk took over the company and laid off thousands of workers. Meanwhile, YouTube recently updated its misinformation policy and no longer removes videos containing false claims about the 2020 US presidential election.