A lawsuit accuses the Biden administration of exerting undue pressure when communicating with social media companies in efforts to mitigate disinformation campaigns. But platforms must be able to communicate with the government to provide informed content moderation and avoid preventable harms, including harms to public health and to democratic participation, both of which are at issue in the lawsuit.
The Case of Missouri v. Biden
Last month, most communications between social media platforms and the U.S. government froze due to a preliminary injunction issued by a Louisiana judge. The injunction came in response to Missouri v. Biden, a lawsuit brought by the attorneys general of Louisiana and Missouri. The lawsuit argues that the Biden administration improperly coordinated with social media companies like Meta (then Facebook), Twitter, and YouTube to censor conservative speakers and viewpoints under the guise of campaigns against disinformation and misinformation. According to the plaintiffs, officials from the administration and Democratic legislators embarked on a campaign of threats to pressure social media companies to censor content that did not favor them. If platforms did not comply, the attorneys general argue, the government implied it could act to remove their Section 230 protections or pursue antitrust litigation against them. The lawsuit also posits that coordination between government agencies (like the National Institute of Allergy and Infectious Diseases or the Centers for Disease Control and Prevention) and social media companies to moderate COVID-19 misinformation amounts to government censorship. The Court of Appeals for the Fifth Circuit granted a stay of the injunction at the request of the Department of Justice. Last week, the parties presented oral arguments, and the appeals court will decide whether the injunction should be reversed.
The original injunction barred employees of U.S. government agencies – including the Department of Health and Human Services, the Census Bureau, the FBI, and the State Department – from working with social media companies in any way that could encourage the removal of speech that is protected by the First Amendment. The injunction prohibited holding meetings, flagging or referring content for review, urging or recommending changes in content moderation guidelines, or even requesting reports about content removal. It also banned collaboration with coalitions and organizations like the Election Integrity Partnership, the Virality Project, and the Stanford Internet Observatory, which bring together different stakeholders to share information and strategies for reducing misinformation online. The injunction noted that these government agencies and employees were still allowed to inform platforms about topics like crime, national security, public safety threats, voter suppression, and election disruption, as well as to communicate about the removal of speech that is not constitutionally protected.
Despite these exceptions, the injunction chilled most communications between the U.S. government and social media platforms because its ban on communications was so broad. As the DOJ argued in its filing for a temporary stay of the decision, the line defining criminal activity is not always obvious, so it would have been extremely difficult for the government to communicate with platforms while ensuring its communications fit within the exceptions.
The case has attracted arguments on both sides, including an amicus brief from Stanford researchers associated with the Election Integrity Partnership and the Virality Project, arguing that their communications have been misrepresented and that their own First Amendment rights have been violated, and another from Republican members of the House of Representatives asking the court to affirm the injunction.
Final Ruling Could Mean Worse Content Moderation and Greater Harms for Users
This case represents a pivotal moment for content moderation. A ruling favorable to the Louisiana and Missouri attorneys general, with the very broad restrictions it would place on communication between federal agencies and platforms, would radically undermine platforms' ability to conduct content moderation based on authoritative information and would likely worsen the quality of information that U.S. users see on social media.
The lawsuit essentially accuses the Biden administration of “jawboning,” the use of official authority to influence the actions of private actors. When it comes to social media platforms, this is an especially common occurrence. In the past, both Republican and Democratic administrations and elected leaders have met with platforms to discuss policies and approaches, and members of Congress from both parties have even used congressional hearings to ask tech company officials about specific content moderation decisions they disagreed with. Political leaders are within their rights to do so, and platforms can benefit from this input. The lawsuit, however, treats all communications between the platforms and the federal government as part of a coordinated suppression campaign, arguing that they amount to undue pressure or illicit collusion. Accepting that view would not only radically change how both parties engage with platform companies, but would also isolate democratically elected officials from discussions of the role of platforms in public life.
Accepting this overly broad understanding of inappropriate jawboning could also suppress future collaboration between the government and platforms on key issues like national security, election integrity, and public health and safety. Public Knowledge’s view is that to combat harmful misinformation, platforms should engage more, not less, with stakeholders to prepare for and manage events that are conducive to harmful narratives. Conversations between governments and platforms are necessary for users to enjoy a healthy information ecosystem and to avoid preventable harms. Because of their access to unique government resources, public officials have up-to-date, expert information about national security, public health emergencies, and developing elections that platforms are unlikely to have on their own. Upholding the injunction would make coordination on these critical issues impossible, as the DOJ has argued. With the 2024 election approaching, as other civil society groups have also warned, broad legal limits on this collaboration are especially worrying. We have already experienced the real-world impact of disinformation about the integrity of U.S. elections.
More broadly, upholding the injunction and curtailing the flow of information of civic interest would harm competition and reduce benefits to users. Content moderation is not government censorship; it is an editorial decision by the platform. Content moderation is also a key way for platforms to differentiate themselves in the marketplace and provide beneficial services to their users. To do this effectively, platforms must be able to choose to engage with government officials as well as other stakeholders. For example, it is difficult to imagine how platforms could design effective content moderation strategies for health misinformation without access to authoritative information from the Department of Health and Human Services or the Centers for Disease Control and Prevention. Of course, platforms must be open about their content moderation policies and practices. As Public Knowledge has advocated, users must be able to know the terms of service and community standards that are in place and to appeal content moderation decisions. But the policies and practices platforms establish should be informed by the stakeholders they choose, including government agencies.
Another troubling element of the injunction is its prohibition on government communication with the Election Integrity Partnership, the Virality Project, and the Stanford Internet Observatory for any purpose linked to the removal of protected speech. As Stanford University argued in its amicus brief, this infringes on the right of civil society organizations to communicate freely with the government. Researchers and nonprofits must be able to exchange information with the government about the harms users may experience on online platforms and to make recommendations. Accepting the plaintiffs’ view that civil society organizations are proxies through which the government exerts undue pressure could seriously curtail much-needed research and advocacy about social media platforms.
Whatever the outcome, the effects of this lawsuit are already being felt. Even if the injunction is overturned, government agencies will think twice about communicating with platforms in the public interest. Platforms may also dial back their content moderation efforts around health or election misinformation, since even legitimate coordination with governments may be seen as carrying legal risks. And beyond the case, the threats against informed content moderation will continue. In July, Republican members of both the House and Senate introduced bills designed to “prohibit Federal employees… from directing online platforms to censor any speech that is protected by the First Amendment.” Both bills invoke the same history and rest on the same logic as the attorneys general’s lawsuit. If passed, they would pose the same dangers to the U.S. information environment.
Broad legal restrictions on conversations between the government and social media companies threaten the ability of users to access credible information online and compound the risk of harm to users. While we must ensure that platforms are not under undue pressure from the government, decisions that restrict informed, independent, and responsible content moderation will likely lead to further deterioration of our information ecosystem.