So Far, the Biggest Threat to Election Integrity in 2024 Isn’t Deepfakes – It’s Bad Content Moderation

In order to produce a more truthful and healthy information ecosystem, we need consistent and clear content moderation.

Just weeks from the 2024 presidential election, our social media feeds are replete with warnings and accusations that election-related content is fake or untrue. We were alerted time and again that generative artificial intelligence (GAI) deepfakes would have an untold influence on public opinion in this year’s election, regardless of the political tilt of the content. But when we focus solely on GAI’s potential to disrupt our elections, we lose sight of the very real threat: traditional disinformation, often amplified by political figures themselves.

AI or no AI, many people are increasingly wary of the veracity of the content they see in their feeds. As Pew Research found, while more than half of Americans get their news from social media, 40% are frustrated that news content on platforms can be inaccurate (a nine-percentage-point increase between 2018 and 2023). Therein lies the problem: when the majority of Americans get their news from social media, yet many have little faith in the accuracy of that information, determining what content is true and what is not becomes a huge undertaking.

Election disinformation (intentionally spreading false information to mislead) is not a new phenomenon. Politicians have every incentive to peddle outright lies to garner favor (or sabotage opponents). Yet even though disinformation has always been deeply intertwined with elections, social media platforms remain, at best, inconsistent and, at worst, irresponsible in how they deal with election-related content. With the 2024 presidential election just weeks away, the controversy surrounding platform decisions on moderating that content has reached peak intensity.

Whether it is AI-generated content or politically motivated disinformation peddled by social media commentators, there is no doubt the decisions platforms make have important implications for trust and safety. The solution is not necessarily to hold platforms accountable for every piece of election disinformation on their sites, but to push them to adhere to their own content policies. Combine that with expanded access to good, quality news and information to counterbalance toxic, harmful disinformation, and we will have a better shot at a productive, truthful, and healthy information ecosystem.

AI Is Not the Threat They Warned Us About

This year, dozens of other democracies held major elections before the United States, including the United Kingdom and the European Union. As it turns out, AI-enabled disinformation hasn’t really had an impact on election results, according to a study from the Alan Turing Institute. As the research indicated, a handful of deepfaked images did go viral, but those amplifying the content tended to be a small number of users already aligned with the ideological narratives embedded in it. In other words, the loudest, most divisive voices on platforms tend not to influence the undecided voter – and we can anticipate a similar takeaway from the U.S. election.

So far, users are actually pretty good at figuring out when a photo is AI-generated. Unfortunately, this is probably a symptom of toxic information systems that make us increasingly suspicious – and increasing distrust is not a good thing. Right now, most GAI-generated photos have a “tell” – there are two left hands, the lighting is a bit off, the background is blurry, or some other anomaly. Pretty soon, deepfakes will be indistinguishable from non-AI-generated content, and potentially disseminated at a scale far too large for humans to review and moderate – as we noted earlier this year.

But that’s not to say we, the general social-media-using public, can always tell when something is fake and intended to mislead. Generative artificial intelligence has all the makings of a serious problem. So far, GAI deepfakes haven’t caused the trouble we anticipated, but that doesn’t mean they won’t.

Lawmakers and regulators are already scrambling to respond to the perceived threat GAI poses to elections, with mixed success and ongoing debates over First Amendment issues. In any case, many of these laws and regulatory decisions are coming just weeks before election day in November and will probably not have any meaningful effect on the use and impact of deepfaked election content. What should be the center of attention instead is what’s really driving election-related content problems: potentially harmful disinformation from partisan opportunists amplifying falsehoods straight from people’s mouths (or fingertips), and platforms’ disinclination to deal with that content.

Partisan Opportunists Amplify the Neighborhood Rumor Mill

We’ve seen some notable cases of knowingly false information being peddled and amplified to bolster political platforms. Former President Donald Trump’s baseless claim that Haitian immigrants were eating pets in Springfield, Ohio, originated from a local woman’s fourth-hand account posted on Facebook; it was quickly debunked by police but nevertheless amplified by Senator J.D. Vance (R-Ohio) and even repeated by Trump himself during a presidential debate. When “my neighbor’s daughter’s friend said Haitians are eating our pets” is considered a good enough source to bolster a political platform on immigration policy, it is not hard to see why the majority of Americans are wary of the accuracy of news on their feeds. And it’s not just right-leaning political content peddling disinformation; liberal social media accounts have taken opportunities to spread misinformation about the otherwise alarming Project 2025 policy proposals for a second Trump administration.

Allowing verifiably false information to fester on platforms does not just make for a messy feed; it can also cause real harm. In the wake of Hurricane Helene and Hurricane Milton, which caused disastrous destruction throughout the Southeast U.S., a barrage of conspiracy theories emerged. Much of the disinformation targets the Federal Emergency Management Agency (FEMA), with claims that President Biden is withholding disaster relief in predominantly right-leaning constituencies to make it harder for those citizens to vote. FEMA has gone as far as setting up a “rumor response” page on its website to dispel the flood of speculation-turned-disinformation inundating social media platforms. When folks reeling from the Helene and Milton disasters are told not to trust the government agency charged with providing immediate assistance, life-or-death situations are made all the more dire.

To be clear, Public Knowledge’s foundational principles uphold the right to free expression. We also believe in holding platforms accountable for setting and enforcing standards for moderating content that can cause harm. Users should understand their chosen platforms’ terms of service, know what those terms mean for content policy, and expect platforms to enforce those policies consistently. Yet, to date, platforms have done a pretty inconsistent job of dealing with problematic election-related content.

Slowing the Momentum of Potentially Harmful Content Is Not Election Interference – It’s Content Policy at Work

Earlier this year, Iranian hackers allegedly stole from a Trump campaign staffer a “J.D. Vance Dossier,” a 271-page background report detailing Sen. Vance’s potential vulnerabilities as presidential nominee Donald Trump’s pick for vice president. Major news outlets that received the stolen dossier decided not to report on it, deeming it not newsworthy. More likely, because the dossier was acquired under sketchy circumstances (allegedly the result of a foreign operation), reputable news outlets were hesitant to amplify unconfirmed information – not unlike the Hunter Biden laptop controversy.

Nevertheless, independent journalist Ken Klippenstein linked to the dossier from his X and Threads accounts, believing it to be “of keen public interest in an election season.” He was promptly banned from X. Links to the document were also blocked by Meta and Google, but it remains available on Klippenstein’s Substack site.

At first blush, X’s and Meta’s actions to limit the distribution of the dossier may seem to run afoul of X owner Elon Musk’s proclaimed free-speech absolutism and Meta CEO Mark Zuckerberg’s recent statement to the House Judiciary Committee that he will be “neutral” in dealing with election-related content. In reality, the platforms’ decisions to moderate Klippenstein are exactly what we are asking platforms to do – act according to their content policies. Klippenstein violated X’s privacy content policy, which states, “You may not threaten to expose, incentivize others to expose, or publish or post other people’s private information without their express authorization and permission, or share private media of individuals without their consent.” (X later reinstated Klippenstein’s account, not as a result of any appeals process, but likely to save face and uphold X as the “bastion of free speech” its owner likes to promote. After all, it was revealed that the Trump campaign had pressured X to limit the dossier’s circulation – an irony, given how loudly the same camp decried the handling of the Hunter Biden laptop story.) Meta has a similar policy of removing content that shares personally identifiable and private information and, more generally, “information obtained from hacked sources.”

Blocking Klippenstein may seem like an outlier to those who feel social media platforms are replete with liberal bias and over-censor conservative content. But the issue in the Klippenstein debacle is not that the largest social media platforms blocked the sharing of the J.D. Vance Dossier; it is that the episode demonstrates how platforms apply their content policies inconsistently and without recourse.

Researchers from Oxford, MIT, Yale, and Cornell recently looked into the question of whether platforms impose “asymmetric sanctions” on right-leaning voices relative to liberal users. They found, as have past researchers, that conservative-leaning users tend to share more links to low-quality news sites and bot-generated content, which are more likely to violate content policies. In other words, conservative voices face more frequent moderation simply because they break the rules more often than other users.

While researchers have confirmed that right-leaning users are comparatively more moderated, platforms still fail to consistently moderate content that violates their terms of service. As natural disasters rampage through the Southeast, antisemitic hate is flourishing on X (formerly Twitter), with Jewish officials, including FEMA public affairs director Jaclyn Rothenberg and local leaders like Asheville Mayor Esther Manheimer, facing severe online harassment as part of the false rumors and conspiracy theories surrounding FEMA’s disaster response. This toxic blend of antisemitism and misinformation about FEMA’s hurricane response foments a volatile environment where online threats could translate into physical harm.

X actually has a policy prohibiting users from directly attacking people based on ethnicity, race, and religion, claiming it is “committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized.” Unchecked hate speech on social media platforms has real-world consequences, and content moderation can and must play a role in mitigating them. Bafflingly, posts that call for violence against FEMA workers and perpetuate hateful tropes about protected classes remain on X and gather millions of views – making it clear that the platform is woefully inconsistent in upholding its own content moderation policies.

While X’s Community Notes feature, which allows users to essentially crowdsource the fact-checking of a post, is an important first line of defense, it is not enough to keep up with the flood of false, harmful content. In times of crisis, platforms have a duty to have policies in place to demote or remove disinformation that could have actual repercussions. In this case, the decision to leave up false information about FEMA disaster responders could mean that victims do not receive the assistance they need and that officials face real threats of violence simply for doing their lifesaving work.

What We Can Learn From This Mess

The 2024 presidential election is weeks away, and the state of platform content moderation remains inconsistent at best and irresponsible at worst. While AI-generated deepfakes haven’t caused the chaos we anticipated, traditional disinformation continues to thrive, often amplified by political figures themselves.

The content moderation debate is contentious for a reason. Freedom of expression is fundamental to democracy, and social media platforms are crucial conduits for speech. Yet when speech on platforms can instigate harm and clearly violates content policy, platforms have a duty to act according to what they have promised. And bolstering healthy information systems requires a set of actions that go beyond platform content policies.

If you don’t like a platform’s content moderation choices, you should be able to find a better home for your speech elsewhere. In Klippenstein’s case, other platforms like Substack and Bluesky have not blocked access to the J.D. Vance Dossier – a case study in why users’ access to a robust, competitive market of social media platforms, each with slightly different content moderation policies, is important for speech. Even better, if platforms are interoperable, users can switch between them more seamlessly without giving up their networks.

If content is moderated (downranked or removed) or a user faces repercussions (suspension or a ban), there should be a clear explanation of how the content violated the terms of service and a way for users to object if they feel the platform is behaving arbitrarily or inconsistently. To put it simply, platforms should give users due process rights.

We also need to counterbalance intentionally false news with quality news, and we need pro-news policy to do that. The goal is not to eliminate all controversial content but to create an environment where truth has the best chance to emerge and citizens can make informed decisions based on reliable information. One solution we have proposed is a Superfund for the Internet, which would establish a trust fund, financed by payments from qualifying platforms, to support fact-checking and news analysis services provided by reputable news organizations.

The solution here isn’t to hold platforms accountable for every piece of election disinformation on their sites. Instead, we need to pressure platforms to adhere to their own content policies by demanding clear, consistent moderation terms backed by due process. Combined with expanded access to high-quality news and information to counterbalance toxic, harmful disinformation, we’ll have a better shot at fostering a more productive, truthful, and healthy information ecosystem. And if and when GAI-generated content has the sort of impact we’ve warned of, platforms will be better positioned to respond. The integrity of our democratic system, trust in our institutions, and our ability to respond effectively to crises may well hinge on these efforts.