Social media platforms are heavily scrutinized each election year for how they handle election-related content. And for good reason. Most voters get their information from online platforms, podcasters, and news influencers, and the candidates who best use these media enjoy favorable election results.
Now that the dust has settled and the new administration is in place, we can look back at how platforms handled the flood of election content – particularly their response to false claims and potentially harmful misinformation that violated their rules. This assessment is crucial for understanding not just what worked and what didn’t but how these decisions shaped the information voters relied on to make their choices.
Why Platforms Changed Their Approach to Political Content
Defining the platforms' decisions this past year was the pressure to correct perceived injustices dating back to the previous presidential election, including the strict moderation of COVID-19-related content, the platform-aided incitement of mob violence at the U.S. Capitol on January 6, 2021, and the bans of President Donald Trump from Twitter and Facebook for inciting the mob. The outcry over alleged "censorship" from the likes of House Judiciary Committee Chair Jim Jordan and President Trump has pushed platforms to revisit their content moderation practices (that is to say, to do less of it). Meanwhile, in June of last year, the Supreme Court ruled in Murthy v. Missouri, a case in which the plaintiffs claimed that communications between Biden administration officials and social media platforms amounted to government censorship. The Court decided the plaintiffs lacked standing because they could not show they were harmed by those communications.
Many targets of the misleadingly labeled "censorship cartel" have bowed under pressure, making content moderation policy decisions that capitulate to the new administration's demands. Trust and Safety teams have shrunk, the umbrella of acceptable content has expanded, and public figures banned from platforms for content that violated their terms of service have been reinstated.
The free speech crusade also targeted researchers and organizations that track and analyze the spread and impact of disinformation on elections. (To be clear, these researchers do not censor anything; they do not force platforms to suppress content. Verifying the legitimacy and accuracy of information is not intended to censor but to give users the full context they need to make their own decisions about what they see.) Meanwhile, online platforms removed or restricted access to content monitoring tools like CrowdTangle, making it harder for researchers to track content and information flows.
National security and intelligence agencies released memos warning of the threat that foreign malign influence operations pose to elections. The memos described "inauthentic social media personas […] intended to infiltrate targeted online communities" and how "generative artificial intelligence tools have made it much easier and cheaper to generate and spread convincing foreign malign influence content." Indeed, as the election season progressed, American intelligence and law enforcement agencies revealed covert information operations by foreign adversaries using AI to create fictitious social media profiles at an unprecedented scale.
The threat of generative AI concerned policymakers, prompting a flurry of federal and state bills ranging from prohibitions on the use of AI in election-related content to requirements that political advertising disclose the use of AI. Few of these AI and election bills moved far, and those that did smacked into First Amendment legal challenges.
As we explained last year, AI can certainly threaten election integrity. But while AI was used in increasingly sophisticated influence operations, such efforts largely proved futile. Instead, the greatest influence on American voters continues to be individual politicians and political influencers who best utilize social media, and the platforms that enable those individuals. That said, this post does not aim to draw conclusions about whether the proliferation of false election-related content had a meaningful impact on the election outcome. It is meant as a comprehensive recap of platform decision-making during the election, and an analysis of whether those decisions had their intended effect.
Assessing Platform Decisions Surrounding the Election
Divisive rhetoric is highly engaging, eliciting flurries of comments and reposts – especially in a political environment where the electorate is split nearly 50/50. Algorithms amplify that divisive content, acting as a siren song for users who love to get the last word in. Naturally, users hoping to garner attention are incentivized to intentionally post incendiary content, which content moderation rules may hinder. What results is a feeling of being silenced and of having one's viewpoints suppressed arbitrarily. When political candidates decry this alleged "censorship," it validates these users' beliefs that platforms are systematically silencing certain political viewpoints, even though the evidence doesn't support these claims.
Making matters more complex, platforms often aren’t transparent about how they make content moderation decisions. During recent elections, different platforms took widely varying approaches – from allowing almost everything to banning political content entirely. Each choice drew intense scrutiny from users, many of whom were already primed to see content moderation as politically motivated censorship rather than good-faith efforts to maintain platform integrity.
X / Twitter
When Elon Musk bought Twitter in 2022, he shared internal documents he called the “Twitter Files” with journalists he selected. These files revealed the complex decisions behind two major content moderation calls: banning President Trump after January 6, and limiting the spread of the Hunter Biden laptop story. While Musk framed this release as exposing bias, one of his chosen journalists, Matt Taibbi, found no evidence that the government had pushed Twitter to suppress the laptop story. Still, the release sparked Congressional investigations and deepened public skepticism about how social media platforms make decisions about what we see online.
One social media flashpoint in the election cycle was the assassination attempt on then-presidential nominee Donald Trump on July 13. Conspiracy theories naturally exploded across platforms as users tried to make sense of what motivated the would-be assassin. Some users on the Left suggested the attack was a maneuver to gain sympathy for Trump, while others on the Right accused the Secret Service of deliberately leaving him open to attack. Each side blamed the other for promoting the hostile rhetoric on social media that incited such violence. Days after the assassination attempt, Elon Musk declared on his platform, X/Twitter, that he supported Donald Trump for president, revealing his plan to make X/Twitter the unofficial platform of the Trump campaign.
In the months following his endorsement of Trump, Musk used X to bolster his relationship with the now-president, amplifying Donald Trump's posts and those of his supporters. By August, political content made up 17% of X/Twitter, up from 2% when Musk took over the platform.
Meta
Election season for Meta is undoubtedly a contentious, stressful time. Facebook has, in elections past, been the premier Bad Guy – most of the time, for good reason. Between the Cambridge Analytica scandal and serving as a hotbed for COVID-19 conspiracies, Meta seems to fumble its response time and again. With 3 billion global users and only so many people on its Trust and Safety team, Meta is bound to make someone unhappy.
Meta, like Twitter, reinstated Donald Trump on its services in early 2023, with some restrictions – for instance, penalties if he were to use the platforms to delegitimize the election. By mid-2024, Meta had lifted those restrictions, citing the belief that "the American people should be able to hear from the nominees for president on the same basis."
Characterizing Meta's election content moderation decisions was Mark Zuckerberg's apparent capitulation to the dominant conservative criticisms of platform-enabled censorship dating back to the Hunter Biden laptop controversy. Demoting the Hunter Biden laptop story was consistent with Facebook's policies at the time: limit the spread of content that, as the U.S. government warned, could be a lie spread by foreign adversaries, so that independent fact-checkers can confirm its veracity. Yet once parts of the New York Post story were verified, Republican lawmakers launched an all-out war against platforms' fact-checking efforts. In August, under pressure from Republican Representative Jim Jordan's relentless smear campaign against Meta for its treatment of the Hunter Biden laptop scandal, Meta CEO Mark Zuckerberg conceded, saying, "In retrospect, we shouldn't have demoted the story. We've changed our policies and processes to ensure this doesn't happen again – for instance, we no longer temporarily demote things in the U.S. while waiting for fact-checkers."
Zuckerberg's letter missed the mark in a few ways. First, it hamstrung Meta's ability to moderate sensitive election-related content without facing unbearable scrutiny from users and policymakers alike. Second, it (falsely) reaffirmed the misguided idea that conservative voices are particularly over-moderated as a result of pervasive liberal bias at social media companies.
These positions set the stage for Meta's January 7 announcement, in which the company abandoned traditional fact-checking in favor of community notes. The company framed this decision as promoting free speech, echoing Donald Trump and Representative Jim Jordan's rhetoric. This shift, coupled with Meta's revised hateful content policy – particularly its relaxed stance on LGBTQ+-related content – suggests a broader realignment with conservative political narratives rather than a genuine commitment to open discourse.
Threads
Meta's Threads platform, launched in Summer 2023 as a text-based competitor to X/Twitter and integrated with Instagram's user base, reached nearly 300 million monthly active users by the end of 2024. The platform's handling of the election, however, proved problematic. While many users who had left Twitter hoped to use Threads for real-time election coverage, they encountered a significant obstacle: the platform's non-chronological feed algorithm, which surfaced posts from the previous 24 hours in no particular order. This design choice aligned with Meta's broader strategy of minimizing news and political content, aiming to create a more hospitable environment than its rival and to avoid past election-related challenges. However, the approach rendered Threads essentially unusable for election night coverage, ultimately driving users back to X/Twitter and other platforms for real-time updates – or to Bluesky.
Bluesky
Bluesky, the new kid on the block, endured its first presidential election – and came out a winner. Former Twitter CEO Jack Dorsey launched Bluesky a year before offloading Twitter to Elon Musk. Despite being yet another text-based platform, Bluesky occupies a unique niche: it is open source and decentralized, giving users greater control over their feeds, specifically how content is organized and moderated. While the platform still faces many of the same moderation quagmires as other platforms, like establishing the threshold at which a user should be banned, many users felt Bluesky offered a healthier online experience. Those who believe content moderation makes for better online experiences defected from the more lawless X/Twitter, especially in the weeks following election day – resulting in over 25 million new Bluesky users.
TikTok
In keeping with its long-standing policy, TikTok maintained its prohibition on paid promotion of political content. In addition, the platform created an in-app U.S. Election Center and Election Integrity Hub as media literacy resources, and applied content labels to state-controlled media and unverified election claims.
Despite TikTok's laudable election integrity efforts, in April Congress voted overwhelmingly to force the platform's Chinese parent company, ByteDance, to divest its ownership. Lawmakers passed the ban over concerns that the Chinese government could use TikTok to access Americans' sensitive data and manipulate users by using the platform's algorithm to amplify misinformation. While there is no definitive proof that the Chinese government is manipulating TikTok content or its users, President Biden signed the ban into law, with a divestment deadline of January 19, 2025. The law's First Amendment implications reached the Supreme Court, which upheld the ban on national security grounds, establishing a significant precedent for U.S. lawmakers seeking to restrict platforms that promote undesirable speech.
Takeaways
Platforms Continue To Wrestle with Free Speech Principles
Determining what content gets distributed to which users, and at what rate it is amplified, puts platforms in a thankless yet powerful position. Any decision to allow controversial content to remain up, to remove it, or to deamplify it is bound to make someone angry – even more so now that disagreements over moderation have become highly politicized and are framed as an assault on free speech itself.
Central to the platforms-and-elections conversation is whether and how these platforms deal with blatantly false information. Free speech absolutists say that any and all content should be accessible, even content that is demonstrably false and intended to mislead, and that it should be up to users to determine how to deal with that content. We at Public Knowledge have pointed out that enabling the free flow of hate speech in the name of free expression suppresses the speech of some, and that online false information can lead to real-world harm. Take Gamergate, when game-makers and critics – many of them women – advocated for more inclusion in gaming and faced vicious anti-feminist opposition. Opportunists popularized #Gamergate on Twitter to galvanize the alt-right against those advocating for more women in the gaming industry, going as far as tweeting public death threats against women game developers. Not only were feminist advocates targeted by a barrage of unmoderated vitriolic posts on platforms like Twitter and Reddit, they were also doxxed, "SWATted," and bullied off the platforms – effectively chilling those advocates' speech.
This is a good time to remind readers that the First Amendment guarantees the right to express yourself without government interference. Section 230 of the Communications Decency Act allows platforms to determine what content can and cannot appear on their services. In fact, moderating content is considered the platforms' own expressive activity and is therefore protected by the First Amendment, as the Supreme Court affirmed in last summer's Moody v. NetChoice and NetChoice v. Paxton decision. Likening fact-checking and content moderation to censorship promotes the misguided view that restricting hate speech and harmful content is akin to a First Amendment violation, making it all the more difficult to promote healthy online spaces without the threat of vicious backlash.
User Choice and Control Over Feeds
The election season taught us that people prefer choice and the ability to curate their own feeds with moderation standards suited to their preferences and principles. Decentralized platforms served as testing grounds for a new type of social medium known as the fediverse. On federated platforms like Bluesky – interoperable platforms that allow users to post content from anywhere, have it appear in all their followers' feeds, and attract followers from other platforms – users can create their own servers and implement their own content governance rules. Threads proved that importing followers from another platform can make new social media options competitive, but that a lack of control over content moderation policies can push new users away. Meanwhile, Bluesky showed that robust user control over content moderation is a strong selling point.
As a result, we saw that users, rather than choosing the platform with the features they like, choose platforms they believe match their values – values that also tend to be politically oriented. X/Twitter was made out to be a conservative watering hole, while Bluesky became the safe haven for progressive liberals. There are valid concerns that the deepening cleavage among users will exacerbate our already splintering society. Because of these information silos, people exist in entirely different realities depending on their social media habits. At a time when Americans across the political spectrum are wary of traditional news media, and some view social media content from individual contributors as more trustworthy than established journalists, it will be even harder to find a common truth.
The Exigent Need To Reinforce Healthy Information Systems
No matter where you fall in the content moderation debate, the fact is that the quality of our feeds on dominant platforms is worsening. It's not just that disinformation and conspiracy theories run rampant; the prevalence of AI slop and "pink slime" journalism, combined with platforms' decisions to demote quality news in order to reduce politics in feeds, has resulted in the dominance of poor-quality information. Better content moderation policies are not a catch-all solution for cleaning up our feeds. In fact, there is no panacea for reversing our worsening information systems. It will take a mix of thoughtful policy and regulation and an unwavering commitment to public interest principles.
The 2024 election proved that users like choice. Real choice comes from a competitive marketplace, where new platform entrants (especially federated platforms) have a fighting chance against established social media behemoths. It also comes from reinforcing principles of free expression, which include minimizing the reach of hate speech that would otherwise silence protected classes of users. When platforms do take action on content, whether through removal or reducing its reach, users deserve both transparency about these decisions and due process to appeal them.
A healthy online system needs plenty of reliable, high-quality information, particularly from trusted news sources. But good content alone isn’t enough – we also need to help people develop the skills to navigate today’s complex information landscape. This means teaching practical media literacy skills, like cross-checking sources (what experts call “lateral reading”), so everyone can better judge what they’re seeing online. In short, the path to healthier online spaces isn’t through quick fixes or heavy-handed controls, but through thoughtfully designed systems that give people both the tools to make informed choices and the skills to navigate the information system.