Election Disinformation 2022: The Battlefield Shifts Again

After the presidential election in 2020, Public Knowledge reported on how effectively digital platforms had countered the “election misinformation war,” meaning the torrent of election misinformation flowing across their platforms. We concluded, as we did during the COVID-19 pandemic, that “in a high-stakes and high-visibility situation, platforms will develop new standards and solutions to problems they previously claimed were impossible to solve.” Days after that election, we theorized that the dominant platforms had taken sufficient accountability for the content on their platforms that Americans could safely get to the polls and vote. But we warned of a new risk: that an extended vote-counting period – due in part to COVID-19 precautions that encouraged early and mail-in voting – would allow misinformation to erode trust in the results after the election. 

We published that blog post two months before the insurrection at the U.S. Capitol on January 6. On that day and in the months since, we have learned a great deal about the potential for violence stemming directly from election-related networked disinformation. (Misinformation – that is, false information that isn’t deliberately misleading – also holds the potential for harm. However, because of its particular role in elections, we will focus here on networked disinformation: deliberately false information seeded and spread in the interest of power or profit, in this case to influence elections.) We have seen how election-related conspiracy theories and disinformation, long after the actual election, can be used to justify a record-breaking number of new laws designed to suppress voting or distort the electoral process, and to further the campaigns of election-denying political candidates seeking to take over the administration of elections. We have also seen harassment, intimidation, and threats directed at the people – including volunteers – who administer U.S. elections.

This was the context in which we implemented a weekly tracking system to monitor platform efforts surrounding the midterm elections. It tracked the platforms’ own public policy pages and announcements about their strategies; researchers’ reports about their effectiveness; and news reports about how platforms worked to mitigate election-related disinformation. Would the platforms heed the lessons of 2020 and step up their efforts for the 2022 midterm elections? What did we see from the platforms – and other media – this time around?

What we found were many of the same themes we had identified in our reporting on COVID-19 and the 2020 election. That is, most social media platforms consult too little with stakeholders on policy development; lack clearly articulated, stable policies with clear conditions for action; offer insufficient transparency about their enforcement and its results; chronically underinvest in languages other than English; and give researchers too little access to data about their actions and outcomes. But there was also an acceleration of a theme we found in our more recent reporting on the “information war” in Ukraine: the movement of content to a range of alternative platforms, resulting in more consumer choice but a fragmented landscape for the war against harmful disinformation.

Ample Time To Prepare

Whatever the platforms chose to do or not do for 2022, it wasn’t for lack of awareness or time to prepare. As early as the summer of 2022, research and advocacy groups set up scorecards, tracking systems, and databases and published their assessments and recommendations for platforms’ efforts to combat disinformation related to the 2022 elections. Congress convened roundtables and issued reports concluding that strains of disinformation about the voting process and election integrity undermined public confidence in our elections and drove an unprecedented wave of threats of violence against election officials. National security agencies warned of disinformation, including from foreign states, that could threaten the security of the American people and prompt calls for violence by domestic extremists aimed at democratic institutions, political candidates, party offices, election events, and election workers.

A Stop-and-Start Approach to an Ongoing Risk

One of the first conclusions from organizations tracking platform efforts was that, despite the higher stakes and the proof that election-related disinformation can continue well past the relevant elections, several platforms had quietly discontinued their efforts to combat election disinformation soon after the 2020 election. For example, Twitter acknowledged (several times) that it wound down its Civic Integrity Policy in March 2021 and didn’t revive it until August 2022. Facebook also lifted its ban on political advertising in March 2021. Several platforms that didn’t have specific election policies in 2020 – most notably TikTok – “launched” them in August or September of 2022. Even then, YouTube had not communicated a dedicated election misinformation policy for 2022 (it ultimately added a section to its general misinformation policy).

After these platforms finally communicated their policies for 2022 in August, September, and October, they were roundly criticized by advocates and journalists for simply bringing back the 2020 policies that had proved inadequate to stop the “Big Lie” of a fraudulent election, and related calls to violence, from propagating on their platforms. Critics pointed out the continuity of the Big Lie narrative from 2020 and the fact that some election deniers were now political candidates themselves, raising the stakes for the actual outcomes of the election.

Chronic Weaknesses in Enforcement Remain

Whatever the platforms’ policies, researchers and journalists continued to point out patchy enforcement – sometimes by design. Granted, it’s become a predictable part of the election cycle for researchers and journalists to point out failures of moderation. For example, new labeling approaches failed to address posts of election candidates claiming the 2020 election was rigged, according to one report. Another described “ongoing gaps” and “systemic failures” in enforcement of policies. In the final weeks of the election cycle, one source assigned the platforms grades for their ability to manage election disinformation, which ranged from “B” (usually for Twitter) to “F” (for TikTok).

While this post does not attempt to grade platforms, we provide some highlights of how their actions demonstrate each of our key themes. 

Twitter

Twitter has played an outsized role in election politics since 2016 because of the high number of journalists, politicians, and influencers using the platform. That makes its policies and practices a matter of particular concern – a concern intensified by Elon Musk’s purchase of the platform on October 27, 2022. Despite concerns that the takeover would create a free-for-all in the weeks leading up to the election, Twitter did not, and still has not, changed the Civic Integrity Policy it introduced in 2018 and “activated” with some new enhancements in August of 2022. (Twitter representatives maintain that its “always-on” policies were still working to address election disinformation before August 2022, but that raises the question of why an election-specific policy should be needed at all.) These Civic Integrity Policy enhancements included a new approach to labeling and upgraded “prebunking,” both research-proven disinformation mitigation strategies, plus state-specific event hubs and a dedicated “Explore” tab focused on elections.

However, as to enforcement, it’s anyone’s guess. For example, there were reports that essential content moderation tools had been frozen after the takeover, but Twitter described this as a risk mitigation strategy that did not significantly impact enforcement of its rules. There was a spate of racist and antisemitic posts immediately upon the close of the sale – the result of orchestrated accounts testing the boundaries of Twitter’s rules – which were quickly removed. Rapid-fire layoffs and resignations, including at the highest levels of the safety and integrity teams and among contractors, may have impacted content moderation, but with both internal and external communications teams mostly out the door, it’s difficult to do more than speculate about what’s going on behind the scenes. We do know that former President Trump, a regular election denier, has been reinstated (though he hasn’t yet posted, potentially due to agreements with Truth Social funders), and that the promised “content moderation council” meant to make that decision and govern any other change in policy has failed to appear. It remains to be seen whether, as one highly respected former Twitter executive postulated, “the moderating influence of advertisers, regulators and…app stores” will impact policy or enforcement under Twitter’s new owner.

Meta

Meta continued its approach of allowing content that is “newsworthy and in the public interest” to remain even if it violates Meta’s community standards (though it may label such content), and of exempting most politicians from fact-checking. Meta noted that labels would be used in a more “targeted and strategic way” (i.e., less often), based on findings that users felt they were overused in 2020. It also continued to favor strategies that lean toward free expression: “authenticity and transparency” requirements (rather than restrictions on content) for political ads; elevating sources of authoritative information (rather than limiting the spread of disinformation); labeling; fact-checking; and user controls. These strategies, plus changes in Meta’s organization – including the dispersal of much of its election integrity team and the dismantling of CrowdTangle, the social analytics tool researchers use to track the movement of social content in real time – were also seen as a shift in emphasis away from election integrity.

YouTube

YouTube, which opens its election misinformation policy by emphasizing its tight focus on content with “serious risk of egregious harm” and “real-world harm,” also relies on a strategy of “raising up” other information – including authoritative sources, fact-checks, labels, and information panels – to offset disinformation. Of the major platforms, YouTube was among those that changed the most – perhaps because, in general, its efforts were seen as among the least effective in 2020.

TikTok

TikTok was still primarily an entertainment platform for younger people during the 2020 elections, but its huge reach, short-form video content, powerful but opaque algorithms, and sometimes-suspect Chinese ownership have since made it a powerful incubator of false information. For 2022, TikTok continued its restrictions on paid political content and its emphasis on fact-checking, labels, and its election information portal. However, the platform provides even less transparency, and fewer tools, for researchers to understand the enforcement or outcomes of its policies. According to one report, TikTok failed to catch 90% of ads featuring false and misleading messages about elections.

General Observations

Across platforms, we also saw continued under-resourcing of content moderation in languages other than English. Meta may have made the most progress by extending its fact-checking and media literacy efforts in Spanish, but it disregarded its base of users in virtually every other language. Civil rights and advocacy organizations maintain that moderators still lack the cultural context and understanding of lived experience needed to counter disinformation efforts that are often targeted at distinct communities.

And there was the same lack of transparency and clarity in 2022 as in 2020 about what the platforms were experiencing and what they were doing about it. In their reporting, we still get only the numerator of platforms’ content moderation efforts: dollars spent on content moderation…out of how much overall? Accounts addressed…of how many discovered? Posts removed…of how many still spreading harmful content? Unless we can understand what proportion of the problem is being addressed, and how, we can’t assume the platforms’ efforts are as successful as they claim.

On Dominant Platforms, a Skirmish More Than a War

All that said, based on reports from researchers, strains of disinformation on the dominant platforms were muted relative to 2020. There are multiple reasons for this. Citizens were more familiar with – and some were fed up with – the themes of election disinformation, and they were more skeptical and more wary of what they passed along during the midterms. Offline election officials, trusted community sources, and online misinformation experts did a better job pushing back on misleading narratives. Election outcomes were predicted to favor Republicans, who hesitated to claim election fraud while their candidates were ahead (the basis of a particularly popular Saturday Night Live sketch about Kari Lake’s daily about-faces on the topic). And purveyors of disinformation either expected or experienced mitigation efforts from the dominant platforms.

New Fronts in the Misinformation War

But these purveyors of disinformation also moved on. One trend from the 2020 election that accelerated in 2022 was the movement of content to alternative platforms, many of which have less robust policies, fewer resources for enforcement, or explicitly “anything goes” approaches to content moderation. (As noted above, this was also a theme in our reporting on the information war in Ukraine.) Some of these “alt” platforms exist for the sole purpose of giving voice to “alternative” – including extreme – views, yet they are a news source for a growing subset of Americans. Gab, Truth Social, Gettr, Rumble, and especially Telegram entered election season with no explicit policies on election-related content, and some of the most threatening and violent content, muted on the major platforms, migrated there. There were also higher volumes of disinformation in private groups, robocalls, text messages, and – for the first time – podcasts, which came under scrutiny for hosts “just asking questions” about election integrity. The channel fragmentation and intimacy of podcasts, and the difficulty of analyzing audio content for keywords, may make them a particularly potent vector of disinformation in the future.

A Complex and Interconnected Battlefield

While our focus at Public Knowledge for this tracking is platform accountability, it has to be said that any effort to mitigate disinformation – about elections or any other topic – is made more challenging in an environment where networked disinformation is deployed as a political strategy, including by the candidates themselves. Disinformation can originate just as easily on fringe websites as at political rallies covered on the nightly news. We don’t suggest that digital platforms bear sole responsibility for managing its impact on our elections, our civility, and our democracy.

Citizens should practice digital literacy and be mindful of what they share, and why. Politicians (despite some platform policies to the contrary) should be held to a higher standard to warrant trust and votes. We would make the case that regulation designed to ensure responsible content from broadcasters should play a role, and we would watch for the rise of “pink slime” news sites masquerading as local news outlets on both sides of the political spectrum. And we have made the case that any role for regulation of the algorithmic distribution of content should be very carefully considered by policymakers before adoption. All that said, we are still asking the platforms to do their part.