Have you heard? The COVID-19 pandemic has entered a new phase.
No, it’s not the new, more contagious “super strains” of the virus, or the beginning of the vaccine rollout. It’s the disinformation peddlers’ shift in focus from the virus itself (remember the “infodemic”?) and the 2020 election to lies about the vaccine. The claims range from the idea that the government will force people to be vaccinated, to the claim that the vaccine alters one’s DNA, to the notion that Bill Gates cooked up the pandemic to implant microchips in us all.
And yes, it’s many of the same peddlers. According to NewsGuard, a start-up whose browser extension rates the credibility of news sites, nearly two-thirds of the sites publishing election disinformation have also pushed COVID-19 myths, including about the vaccine. And their motives are the same: divide communities, sow mistrust, undermine faith in institutions, and exert political control. (I’m focusing on content that is intentionally false, so I call it “disinformation,” unless an original source uses “misinformation” or the intent is unclear; more here.) Some of the actors pushing disinformation about the vaccine are targeting the very groups suffering most from the pandemic, particularly communities of color. Add to those challenges the activation of longstanding anti-vaccination communities, which, with varying degrees of sincerity, have begun to demonstrate at vaccination sites. The net result is a “parallel pandemic of mistrust” that hinders our response to the disease itself.
This isn’t some abstract, sky-is-falling call to arms. The margin of error in the effort to vaccinate our way back to normalcy in America is razor-thin. For example, while it’s impossible to pinpoint the exact threshold, Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, said in December that 75 to 80 percent of Americans must be vaccinated by the end of this summer to reach “a degree of normality” by the end of 2021 (his more recent estimate was 85 percent). But a Kaiser Family Foundation (KFF) Vaccine Monitor survey from around the same time found that only 71 percent of those polled said they “definitely or probably would” get a vaccine if it were determined to be safe by scientists and available for free. Political identity had the strongest correlation with vaccine hesitancy in the poll, a pattern that may reflect differing news diets. (Per the World Health Organization, vaccine hesitancy, or “the reluctance to get vaccines even when they are available,” is one of the top ten threats to global health.) Hesitancy was highest among Republicans, who cited a lack of trust in the government’s ability to ensure a safe and effective vaccine, as well as a belief that the viral threat is exaggerated. The same trends appeared in a Pew Research Center study conducted around the same time: 60 percent of Americans said they “definitely or probably would” get a vaccine, but 50 percent of Republicans said they “probably or definitely would not” seek one.
The impact of misinformation on real-world health behaviors is also becoming clearer. A widely reported academic study found that a one-point upward shift on a five-point scale measuring the prevalence of vaccine disinformation on social media is associated with a two-percentage-point drop in vaccination coverage. Another study, focused specifically on the coronavirus vaccine in the U.S. and the U.K., found that recent exposure to misinformation significantly reduced respondents’ intent to receive the vaccine. In other words, misinformation has measurable, real-world consequences for our ability to address a serious public health threat. And this isn’t just about social media: Right-wing outlets like Fox News, Breitbart, Newsmax, and the One America News Network (OAN) are also running misleading articles about the vaccines.
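To make that association concrete, here is a minimal back-of-the-envelope sketch in Python. It is purely illustrative, not a result from either study: it assumes the reported association is roughly linear, treats the KFF poll’s 71 percent “definitely or probably would” figure as a stand-in for baseline coverage, and the `projected_coverage` helper and the shift values are hypothetical.

```python
# Illustrative back-of-the-envelope calculation (not from the study itself):
# the study reports that each one-point rise on a five-point scale of vaccine
# disinformation prevalence is associated with roughly a two-percentage-point
# drop in vaccination coverage. We assume, hypothetically, that the
# association is linear, and we use the KFF poll's 71% intent figure as a
# stand-in for baseline coverage.

ASSOCIATION_PER_POINT = -2.0  # percentage points of coverage per scale point


def projected_coverage(baseline_pct: float, disinfo_shift: float) -> float:
    """Project coverage after a shift on the five-point disinformation scale."""
    return baseline_pct + ASSOCIATION_PER_POINT * disinfo_shift


if __name__ == "__main__":
    baseline = 71.0  # % who said they "definitely or probably would" (KFF)
    for shift in (0.5, 1.0, 2.0):
        print(f"+{shift} point(s) of disinformation prevalence -> "
              f"{projected_coverage(baseline, shift):.1f}% projected coverage")
```

Even under these loose assumptions, a modest rise in disinformation prevalence erases several points of coverage, squeezing the already razor-thin margin described above.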
It all adds up to this: The quality of information about the COVID-19 vaccine, and the willingness of citizens to believe it, directly affect our ability to get to the other side of this pandemic. Whether you care about your own health and that of your family and friends, the economic recovery, an end to the health inequities wrought by the pandemic, getting your kids back to school, or whatever “the other side” means to you, you should care about how effectively our news and information ecosystem handles information about the vaccine. (Yes, you: It’s not enough to say you closed your Facebook account or that you don’t believe everything you read online. These harms are collective, like those of secondhand smoke: Others’ willingness to share and act on false information creates health harms for you.)
How Platforms Have Evolved Content Moderation for the Vaccine
Virtually all of the dominant digital platforms have continued to evolve and expand their policies to remove disinformation and elevate reliable sources, building on the experience Public Knowledge documented during the COVID-19 pandemic and the 2020 election.
For example, after insisting for years that false claims about vaccines were better left visible so they could be debunked, Facebook announced that it would begin removing vaccine misinformation already debunked by public health experts, citing the potential for “imminent physical harm.” One such removal, related to a prescription medication in France, was subsequently overturned by the Facebook Oversight Board, which found the underlying misinformation-and-imminent-harm rule “inappropriately vague and inconsistent with international human rights standards.” The Board recommended that Facebook create a new Community Standard on health misinformation “consolidating and clarifying the existing rules in one place”; increase transparency around how the platform moderates health information; and, most notably for the purposes of this post, adopt less intrusive means of enforcing its health misinformation policies, including adding context through fact-checking and elevating authoritative information. Facebook subsequently said it would remove posts with erroneous claims about all vaccines across its platform, change its search tools to promote authoritative results for vaccine-related queries, and make it harder to find accounts that discourage vaccination.
In late December, Twitter expanded its policy on COVID-19 misinformation to require people to remove tweets that advance “harmful false or misleading narratives” about COVID-19 vaccinations. Twitter also said it would label, place warnings on, or append contextual links to tweets that advance unsubstantiated rumors, disputed claims, or out-of-context information about vaccines. YouTube expanded its “strike” policy to cover a range of claims related to the vaccine, including its origins, ingredients, and efficacy. And Google, through its Google News Initiative, created a COVID-19 Vaccine Media Hub to bring journalists and fact-checkers the latest scientific information about the vaccines. (Pinterest, for its part, introduced a search experience surfacing content from leading public health institutions in response to vaccine-related searches back in 2019.)
But as a recent letter from prominent Democratic senators to the CEOs of Facebook, Twitter, Google, and YouTube pointed out, the platforms’ enforcement of these policies remains sluggish and spotty. Journalists and researchers continue to find content that violates each platform’s stated policies, and to highlight gaps in the policies themselves.
We recently outlined a creative policy proposal, a “Superfund for the Internet,” that could help scale the dominant platforms’ efforts to mitigate misinformation while bolstering the viability of local journalism. It’s grounded in a growing body of research showing that factual rebuttals to misinformation can reduce its spread and even lead people to hold more accurate beliefs. A 2018 University of California study specifically examined the effects of fact-checking vaccine misinformation on social media. It found that people shown fact-checking labels that linked to a credible source, such as the World Health Organization, were more likely to view vaccines positively than those who saw the misinformation alone; this held regardless of participants’ prior vaccine skepticism, the type of vaccine misinformation shown, or their political beliefs. As potential evidence of the results platforms are seeing from fact-checking, the Google News Initiative recently funded a research project investigating how fact-checking can effectively counteract misinformation about COVID-19 vaccines specifically.
Knocking “The Scale Objection” Down to Scale
The platforms have claimed that the sheer scale of disinformation makes it impossible to moderate effectively. But a growing body of evidence shows that disinformation is concentrated among a relatively small set of sources and spreaders, which suggests the problem may be more tractable than it first appears, at least for specific types of disinformation. As early as March of 2020, Facebook knew that a very small number of influential users were driving the flow of disinformation about the pandemic. One recent study showed that online disinformation about election fraud fell 73 percent after several social media sites suspended former President Trump and his key allies. Another found that a small number of Facebook pages act as “super-spreaders” of disinformation about COVID-19 vaccine development, all of them with a history of sharing disinformation about vaccines in general. Effective enforcement of the platforms’ policies, in other words, does not require reviewing every piece of content from every user on the internet. And even a material reduction in the volume and velocity of harmful disinformation would reduce its harms to society and our democratic institutions.
Many factors will determine how effectively and quickly Americans can be vaccinated and set on the path to pandemic recovery: vaccine supply from manufacturers, mobilization of sites and staff to administer doses, the ability to stay ahead of new variants of the virus, equity of vaccine distribution and access, and so on. And we would certainly favor more government public health advertising, ideally placed in local news, focused on addressing vaccine hesitancy.
We also recognize that fact-checking and the resulting actions like down-ranking, labeling, or deleting content do not address the social or political circumstances that cause disinformation to be posted in the first place. But they’re a meaningful start on mitigating the harms it can create.
And this isn’t just about COVID-19. As Public Knowledge pointed out in a recent letter to the incoming Congress, every other administration priority, from economic recovery to racial justice to climate change, will be subject to the same patterns of disinformation and manipulation.
We cannot forge solutions to these massive, complex problems without first restoring a “sense of shared reality,” and we cannot rely on the platforms’ discretion or goodwill to do it. Effective technology policy can help address the multitude of public health, economic, and social threats we face. A “Superfund for the Internet” marks the first step.