“Lies All the Way Down” – Combating 2024 Election Disinformation

As a new election season approaches, the same problems remain.

Over the past few years, Public Knowledge has studied and reported on how digital platforms manage misinformation and disinformation around key world events. Our goals are 1) to provide real-world case studies of how effectively the platforms set and enforce standards for moderating content with the potential for harm, and 2) to draw out the implications for public policy regarding platform accountability. Consistent with the values we bring to free expression and content moderation, we’ve chosen to focus on topics with the potential to impact democratic institutions and systems, including electoral processes.

As we approach the 2024 U.S. presidential election, we want to understand what has changed – and what we can expect from platforms – since our 2022 Election Disinformation report. At that time, we concluded that platforms consult too little with stakeholders on policy development; lack clearly articulated, stable policies with clear conditions for action; provide too little transparency about their enforcement and its results; chronically underinvest in languages other than English; and give researchers too little access to data about their actions and outcomes. We also noted the growing fragmentation of media – including new social media platforms with free-for-all content moderation policies, as well as podcasts, robocalls, and text messages designed to carry false election themes. Researchers had begun to put more scrutiny on publishing channels, such as Substack and Reddit, and called on them to articulate more robust policies for election information. Even if they have the best intentions – and we’re not saying they all do – these new channels face a steep trust and safety learning curve.

Spoiler alert: In general, none of the issues we identified in 2022 have been addressed for 2024, and some new developments will almost certainly make the war on false election narratives harder to win. As a result, we will put forward both some familiar policy solutions and a few new ideas designed to mitigate the new risks. For example, we already advocate for breaking the anticompetitive lock that the dominant legacy platforms have on public discourse, enacting a national privacy law, requiring platforms to increase transparency regarding their algorithms, implementing a more assertive media policy, and creating a digital regulator. We also believe that oversight and regulation ensuring responsible AI development is in the public interest.

(Note: Networked disinformation – that is, false information deliberately seeded and spread to gain power or profit – is now a widely deployed and particularly dangerous political strategy. So, we will refer to these narratives in that way – as disinformation – rather than as less intentional misinformation. But make no mistake, both disinformation and misinformation have the potential to disenfranchise voters, distort election outcomes, and induce political violence – and both should be addressed in platform content moderation policies and practices.)

The Big Gun Everybody’s Talking About: Generative Artificial Intelligence

One of the biggest, splashiest new factors in the election information environment since our last report is the advance of artificial intelligence, especially the explosion of generative AI. Two things have changed since 2022. First, the technology has vastly improved. Second, it is widely available and easy to access and use, so it is no longer the province of well-funded, technologically savvy operators. Although there have been concerns and warnings about it in the past, this is the year when we can expect AI tools to become a mainstream vehicle for disinformation.

Public Knowledge has discussed the risks of generative artificial intelligence for disinformation before, including that the technology will increase the number of parties that can create credible disinformation, make disinformation less expensive to create, and make fake or false information harder to detect. Countless civil society groups, academics, researchers, and others are now calling for some degree of regulation. Even OpenAI CEO Sam Altman testified to Congress that his greatest concern is that the technology could be used to spread “one-on-one interactive disinformation” in the run-up to the 2024 election. (The Federal Trade Commission has since opened an investigation into OpenAI over its products making “false, misleading, disparaging, or harmful” statements about people and over practices that may create “reputational harm.”) Even if industry and regulatory leaders were not worried about the threat this technology poses to elections, malicious state actors like Russia and non-state actors alike can use machine-generated propaganda to sway public opinion on any number of topics.

This isn’t speculative: Generative AI already threatens the integrity of our electoral process. We’ve seen deepfake news clips of real reporters spouting lies and viewed deepfake political advertisements (such as the Republican National Committee’s AI response to President Biden’s re-election announcement). And yes, robocalls are in play: February saw the widely reported use of an AI-generated clone of President Biden’s voice (made in 20 minutes for $1). AI has the potential to generate constantly evolving, precisely targeted messages at an unprecedented scale, making human review effectively impossible.

Some platforms have announced new policies for this content, in some cases specifically regarding elections. YouTube will generally require creators to disclose when they’ve uploaded manipulated or synthetic content, including video that has been created using generative artificial intelligence. Meta will require political ads running on Facebook and Instagram to disclose if they were created using artificial intelligence and will label them accordingly. We expect that policies of this nature will be enforced with the same discipline – or lack thereof – we have seen for the platforms’ other policies. 

The leaders of the mammoth companies that develop these technologies seem to understand the risks – if not to democracy, then to themselves. They’ve seen their peers – and in some cases, their own CEOs – hauled before Congress in past years to defend their content moderation policies and practices, and they know that elections are a hot spot for Congressional leaders looking for an opening to regulate AI (three bills so far). They’ve read the letters from legislators asking federal agencies to have plans in place to address all the possible uses and ramifications of artificial intelligence in the electoral process, including the targeting of Black, brown, and other minority communities. So they’ve put policies in place – at least on paper – regarding the use of their products in elections. For example, OpenAI prohibits anyone from using ChatGPT to create materials targeting specific voting demographics, a rule meant to prevent people from abusing the platform to spread targeted disinformation at speed and scale. Yet ChatGPT has generated targeted campaigns almost instantaneously when asked. An analysis by the Washington Post showed that when prompted with requests specific enough to match messages with targets and candidates, ChatGPT complied with ease. It told suburban women that Donald Trump “prioritize[s] economic growth, job creation, and a safe environment for your family” and told urban dwellers about ten of President Biden’s policies, such as climate change mitigation and student loan debt relief, that would appeal to younger voters. Even months after OpenAI was notified about these prohibited uses, the behavior had not been fixed. OpenAI has also announced that it is developing and testing AI technologies for moderation purposes. But there has been little reporting on their success, and even in the best circumstances, the company will face the same trust and safety learning curve that companies like Meta have been scaling since 2016.
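
As an aside on what automated screening looks like in practice, below is a minimal sketch of the general “screen before publishing” pattern, written in Python against OpenAI’s public moderation endpoint (the moderations API in the official openai package). It is an illustration of the pattern only, not a description of OpenAI’s internal tooling – and note that the endpoint’s standard categories (hate, harassment, violence, and the like) do not cover election-specific falsehoods, which is part of why enforcing election policies remains hard.

```python
# Sketch of a "screen before publishing" workflow built on OpenAI's public
# moderation endpoint. Illustrative only: the endpoint's standard categories
# (hate, harassment, violence, etc.) do not cover election-specific falsehoods.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def screen_text(text: str) -> bool:
    """Return True if the moderation classifier flags the text."""
    response = client.moderations.create(input=text)
    result = response.results[0]
    if result.flagged:
        # Surface the category breakdown so a human reviewer can follow up.
        print("Flagged:", result.categories)
    return result.flagged


if __name__ == "__main__":
    draft = "Example campaign message drafted by a generative model."
    if not screen_text(draft):
        print("Not flagged by the general-purpose classifier.")
```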

These efforts by the AI companies and the platforms that host their content may not be enough. As Meta’s President of Global Affairs pointed out, “No one tech company, no one government, [and] no one civil society organization is able to deal with the advent of this technology.” 

Dominant Platforms Have Lowered Their Own Defenses

The new risks of generative artificial intelligence are compounded by trends within the tech industry since the 2020 and 2022 elections. Tech companies have been leaning away from content moderation – and from taking responsibility for the content on their platforms – through staffing cuts, restrictions on independent research, and changes to internal policies.

X (the platform formerly known as Twitter), Meta, Google, Amazon, and Microsoft all took steps to cut back their content moderation departments. Since its acquisition by Elon Musk, X Corp. has moved to cut 30% of its trust and safety staff and 80% of its safety engineers going into 2024. Meta, Google, Amazon, and Microsoft have gone down similar paths with significant workforce reductions, including major cuts to their content moderation teams. Meta’s cuts also directly gutted its ability to pursue strong and principled content moderation, as the company let many of its policy staffers go. Current and former Meta trust and safety employees have raised concerns that these cuts will hamstring the company’s ability to respond to political disinformation and foreign influence campaigns and could make Facebook, Instagram, and WhatsApp dangerous places for disinformation to fester and grow. Alphabet Inc. (the parent company of Google and YouTube) cut policy experts, reportedly leaving only one person responsible for misinformation and disinformation policy worldwide. It compounded the problem by laying off at least a third of the employees at Jigsaw, leaving the subsidiary that develops tools to combat disinformation with a “skeleton crew.”

In addition to gutting content moderation teams and tools, platforms have restricted independent researchers’ access to the data needed to study their practices and outcomes. These independent audits of social media platforms have been critical to understanding their impacts and to developing new tools to protect our elections and civil discourse. Meta and X have both moved to curtail access: Meta pulled its support from Facebook’s CrowdTangle, a social media analysis tool, and X shut down its Premium API, including its Search and Account Activity APIs, making research on the platform prohibitively expensive for smaller research institutions or researchers without institutional backing.

Some platforms have also softened their own policies related to election disinformation. For example, in June of 2023 YouTube stopped taking down videos that claimed the 2020 elections had “widespread fraud, error, or glitches,” committing to open “debate of political ideas, even those…based on disproven assumptions.” In August, X reversed course from 2019 and decided to allow cause-driven and political ads back onto its platform, and in December, Meta announced that claims that the 2020 election was “rigged” or “stolen” no longer violate its policies.

Other Participants in a Complex and Interconnected Battlefield

Several platforms have accompanied these changes in content moderation policy with algorithmic changes – or outright business strategies – that deemphasize reputable news. Threads has communicated that it “will not amplify” news in an effort to make the nascent platform less toxic than Twitter. Instagram will not place “political content,” including content “potentially related to things like laws, elections or social topics,” on its recommendation surfaces. X removed headlines from the preview images that represent news stories, ostensibly to “improve aesthetics” but probably to keep users from clicking off the platform. Traffic referrals to the top global news sites have “collapsed” over the past year, deteriorating both our current information environment and, given the related declines in publisher ad revenue, the prospects for our future one. The solution to disinformation cannot be zero information; such a vacuum just leaves space for false narratives to fester.

All of this is unfolding against the backdrop of an orchestrated effort by some policymakers to equate government collaboration with platforms – even on the most fundamental pillars of democracy, like ensuring accurate information about when and where to vote – with censorship and suppression of conservative political viewpoints. We discussed this in more detail in a recent blog post, and it will come under scrutiny in oral arguments in a Supreme Court case this week.

Lastly, as some analysts have pointed out, the greatest disinformation threat in 2024 may be politicians themselves. Particularly since the twin 2020 topics of COVID-19 and the U.S. presidential election, academic researchers have repeatedly pointed to political elites as the greatest source of networked disinformation. 

We Vote for Proactive Policy Solutions 

Since many of the risks we can anticipate for the 2024 elections are the same as in the past, many of the policy solutions Public Knowledge has advocated for with regard to free expression and content moderation still apply. For example, we advocate for policies that can break the anticompetitive lock that the dominant legacy platforms have on public discourse. Appropriate agencies should enforce relevant antitrust laws against platforms that have engaged in monopolistic practices to protect their positions. Competition policy should be used to foster rivalry and expand user choice. Congress can and should pass legislation that would loosen dominant platforms’ control. For example, it should require the platforms to design their systems to be interoperable, so that users can switch to competing services with content moderation policies aligned with their values without losing their social networks. Doing so would diversify the public discourse so it’s not dependent on any one platform, owner, or set of content moderation values, while also promoting the innovation of higher-quality products that would better serve consumers and, by extension, voters and our democracy.

Congress should also enact a national privacy law – with, at minimum, the protections that the American Data Privacy and Protection Act (ADPPA) provides – to give users greater control over their personal data, make it easier for users to switch between competing services, and make it harder for platforms to obtain and monopolize access to that data. Reducing platforms’ ability to customize content for distinct audiences also reduces the potential for harm, including disenfranchisement and political violence. Fake content can be identified faster and counteracted more effectively if more than just the true believers see the ads, articles, and posts.

We can also reduce the harms of disinformation by requiring platforms to increase transparency regarding their algorithms, their underlying data, particular forms of content such as paid advertising, and/or the outcomes of algorithmic decision-making, all without incurring the risk of violating constitutional protections for speech. This can take the form of requiring platforms to share appropriate information about their activities with researchers (as the Platform Accountability and Transparency Act (PATA) would), regulators (including the dedicated digital regulator we advocate for), independent auditors, and/or the public. Users should also be able to understand the terms of service they are asked to sign, understand what those terms imply for their online experience, and expect platforms to enforce them consistently, including due process rights when action is taken on their content.

The harms associated with online content are amplified – if not created – by product design that increases time spent on the platform, deepens engagement, and introduces more extreme forms of content to users over time. The role of a dedicated regulator could encompass the creation of liability for “defective design” or “defective features” of digital platforms that cause individual or collective harm (like endless feeds, intermittent variable rewards, ephemeral content, and filters, to name a few).

We can further offset the impact of poor platform content moderation on the information environment by promoting other diverse sources of credible news. Congress should pursue policy solutions to support local news, including by fostering alternative business models and diverse and local ownership and representation. We currently support the Community News and Small Business Support Act, for example, which empowers small businesses and newsrooms themselves through a set of tax credits. We also support the Save Local News Act, which would make it easier for print and online news outlets to register as 501(c)(3) nonprofits. And we’re intrigued by proposals to prevent further consolidation and trigger de-consolidation in media – “replanting” newspapers back into communities – such as financial incentives for local nonprofit organizations, journalist-owned outlets, or mission-oriented businesses to buy newspapers, or for hedge funds to sell them.

Combating Generative AI Disinformation 

We reviewed the options for mitigating the potentially exponential impact of generative artificial intelligence on our information environment in our previous post. At its core, we believe that oversight and regulation ensuring responsible AI development is in the public interest, and like many other civil society groups, we are engaging with policymakers, industry, researchers, and activists to formulate specifically crafted legislation focused on oversight, transparency, and accountability.

In the Meantime, Join the Fight

Protecting against disinformation – and, by extension, protecting our elections and our democracy – will require both public and private action. We acknowledge that many of the federal policies we advocate for are long-term solutions (that’s not necessarily bad, as we can expect some of these dynamics to continue, if not worsen, over time). In the meantime, there are nearer-term steps, both public and private, that can help.

In terms of generative artificial intelligence, a straightforward place to start would be Federal Election Commission rules requiring disclosure of AI-generated content in campaign advertising. Industry could follow through on its stated support for developing and adopting technologies to track and detect the provenance of important and newsworthy images and media. Election officials can ensure they have a general understanding of AI tools and the opportunities and threats they create for election administration.
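
To make the provenance idea concrete, here is a minimal, hypothetical sketch of the verification step, written with only the Python standard library. It checks that a JSON “manifest” sidecar file still matches the media bytes it describes; the file names, the manifest format, and the check_provenance helper are all invented for illustration. Real content-credential systems, such as the C2PA standard, embed cryptographically signed manifests in the media file itself rather than relying on a separate hash file.

```python
# Hypothetical sketch of a provenance check: confirm that a sidecar manifest's
# recorded hash still matches the media file it describes. Real content
# credential systems (e.g., C2PA) embed cryptographically signed manifests in
# the file itself; this standard-library version only illustrates the concept.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def check_provenance(media_path: str, manifest_path: str) -> bool:
    """Return True only if a manifest exists and its hash matches the media."""
    media = Path(media_path)
    manifest_file = Path(manifest_path)
    if not media.exists() or not manifest_file.exists():
        print("Missing media or manifest: treat provenance as unknown.")
        return False

    manifest = json.loads(manifest_file.read_text())
    matches = manifest.get("sha256") == sha256_of(media)
    print(f"Producer: {manifest.get('producer', 'unlisted')} | hash match: {matches}")
    return matches


if __name__ == "__main__":
    # Hypothetical file names; a real ad image would carry embedded credentials.
    check_provenance("campaign_ad.png", "campaign_ad.manifest.json")
```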

At the same time, platforms can adopt some of the simplest proven strategies for increasing trust in elections, such as prebunking and credible corrections of disinformation narratives, and voters can help by learning and applying digital literacy, diversifying their sources of news, following suspicious information upstream to its original source, and taking more care with what they share online. Consumers can also take advantage of resources designed to monitor the most prevalent disinformation narratives (NewsGuard is one), or take a quiz to determine whether they might be susceptible to disinformation. Use caution, though: a new report suggests that “doing your own research” in the form of fact-checking through search platforms can leave people more misinformed.

It will take a concerted effort across our society to make advances in the disinformation war, and it’s not too early to start.