The wave of coronavirus (COVID-19)-related content has become a high-stakes test of social media platforms’ ability to fight misinformation. False recommendations about how to avoid contracting the virus, or about what measures infected people should take to avoid spreading it, have the potential to cause more sickness and death from a pandemic that has already claimed thousands of lives worldwide.
According to data from social media analytics platform Sprinklr, there were more than 19 million mentions related to COVID-19 across social media, blogs and online news sites worldwide on March 11. For context, mentions of US President Donald Trump on the same day came in at roughly 4 million. Many of the COVID-19 mentions likely came from legitimate sources, but given the novelty of the disease and the fast-changing nature of related news, it’s safe to assume that a large portion was inaccurate or outdated.
The current battle against misinformation on most social media platforms is primarily concentrated on so-called “bad actors” who deliberately spread lies and misleading information, sometimes for political gain. Facebook, for example, uses an automated system to surface potentially inaccurate content to third-party fact-checkers, who then review and rate stories so that the distribution of false ones can be reduced. It’s a resource-heavy and time-consuming process, and questions about its effectiveness were raised even before the coronavirus conversation exploded on social media.
Platforms like Twitter and Facebook were also among the earliest sources of accurate COVID-19 information. But because average citizens, celebrities, politicians and others use social platforms to share their coronavirus experiences, air grievances and simply kill time while self-isolating, important health and safety information easily gets drowned out. Many users are well-meaning but uninformed, and they may be spreading inaccurate information without realizing it.
As a result, social media platforms have taken unprecedented steps to stop the spread of coronavirus-related misinformation. Facebook, for example, has provided the World Health Organization (WHO) with as many free ads as it needs and blocked ads from brands that may be exploiting the situation by claiming their products can cure the virus. That’s in addition to increased fact-checking and a pop-up that sends users who search for coronavirus to the website of the WHO or a local health authority. Twitter similarly directs users to health authorities’ sites, such as that of the Centers for Disease Control and Prevention (CDC) in the US.
On Monday, the major social platforms (Facebook, LinkedIn, Reddit, Twitter and YouTube), along with Google and Microsoft, issued a joint statement announcing that they had banded together to fight COVID-19-related misinformation. “We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world,” the statement read.
The swift and extensive action is to be applauded, but it also raises larger questions about social media companies’ ability to police their platforms outside of a global health emergency. None of the tactics used were groundbreaking; promoting facts, demoting lies and banning false information are all part of the platforms’ existing strategies against misinformation. But the concerted effort shows just how much work it takes to significantly reduce the spread of false content.
That’s not to say that all inaccurate stories about COVID-19 were successfully removed or demoted, but it’s clear that legitimate news sources have been prioritized. (Whether those sources provide consumers with up-to-date information is another question, however.)
Replicating that effort on a longer-term basis would be a significant lift for the platforms, and it simply may not be feasible given the volume of misleading and inaccurate information, on many topics, that spreads on social media daily.
According to an NPR/PBS NewsHour/Marist poll published in January 2020, for example, 82% of US adults said it was at least likely that they would read misleading information on social media platforms during the 2020 election year.
There’s also the political factor. It’s easier to rally resources to fight misinformation about a pandemic than about politically or socially contentious issues. Particularly in this age of “alternative facts,” it’s difficult for the platforms to draw a hard line between facts and lies without appearing partisan. That’s likely one reason why Facebook has stuck by its refusal to fact-check posts and ads from politicians, even as it has expanded its fact-checking programs for other types of content.
It’s probably also why TikTok partnered with the WHO to provide information about COVID-19 to users. On Wednesday, the WHO hosted a livestream on its official TikTok page, during which an expert shared tips on staying safe and preventing the spread of the virus and answered questions from viewers in real time. Since launching in the US, TikTok has tried to distance itself from the legacy social platforms by promoting its focus on fun, lighthearted and irreverent content. Case in point: TikTok banned political ads in early October 2019, a few weeks before Twitter did the same.