Content Moderation Is Not Synonymous With Censorship

    Tomorrow, Twitter CEO Jack Dorsey and Facebook CEO Mark Zuckerberg will testify before the Senate Judiciary Committee to discuss censorship, the suppression of news articles, and the companies’ handling of the 2020 presidential election. The last time Dorsey and Zuckerberg were brought before the Senate to testify, the hearing turned into an opportunity for some members of that Committee to air their grievances about alleged “conservative bias” on prominent social media platforms. Unfortunately, Tuesday’s hearing promises to be more of the same, as it features many of those same members.

    It is crucial to understand that content moderation is not synonymous with censorship.

    Reckless use of the word “censorship” by public officials in discussing the content moderation practices of privately-owned platforms conflates two very distinct concepts. Censorship, the suppression or prohibition of speech or other communications, can cause real harm for marginalized communities and anyone holding and expressing a minority viewpoint. Content moderation, on the other hand, empowers private actors to establish community guidelines for their sites and to require that users expressing their viewpoints do so in a manner consistent with that particular community’s expectations of discourse, yielding tangible benefits such as flagging harmful misinformation, eliminating obscenity, curbing hate speech, and protecting public safety. Put another way, some content moderation includes censorship, while other forms (fact-checking, for example) are not censorship because they do not suppress or prohibit the original speech. Conflating the two ideas in order to allow for the spread of disinformation or hate speech is disingenuous and dangerous. It may feel cathartic for some policymakers to rail against companies by alleging censorship, but in order to better serve the American public, what is needed is a rational conversation about the proper place of content moderation.

    The First Amendment only applies to actions undertaken by the government, so-called “state action.” As private entities, social media platforms are therefore free to make their own editorial decisions and develop their own community standards. Users of these platforms still have broader free expression rights that are important to recognize, but they go beyond the narrow First Amendment rights discussed here. Further, some content moderation practices can amount to censorship, just not the kind scrutinized by the First Amendment. Whether or not this type of censorship is beneficial is an entirely different issue. For example, we believe it is acceptable for a privately-owned social media platform to establish community standards that allow for the removal of pornography. The First Amendment gives individuals the right to free speech and is intended to protect the American people from government censorship. It is not intended to empower the government to dictate what platforms can and cannot remove from their websites. There is a balance, and calling all content moderation “censorship” as a means to political ends throws this balance completely out the window.

    Distinguishing between content moderation and harmful censorship is important because critics often claim (without evidence) that prominent social media platforms are violating their free speech rights by fact-checking posts or removing harmful misinformation. This is not to say that regulation of platforms’ content moderation practices is unconstitutional, or that Facebook and Twitter can do whatever they want because they are privately-owned companies — only that the First Amendment cannot be used to compel them to host certain types of content or treat certain types of content in a specific manner.

    Companies like Facebook and Twitter moderate their platforms by setting their own community standards, blocking content, fact-checking, labeling content, and demonetizing pages, activity that courts have found to be fully protected First Amendment expression. Although these practices affect the speech of some individuals, content moderation practices that empower user expression actually allow for more speech — not less. Even content moderation practices that do amount to censorship, such as takedowns of pornographic or obscene content, are both beneficial and acceptable under the First Amendment.

    In Prager University v. Google (9th Cir. 2020), the U.S. Court of Appeals for the Ninth Circuit directly answered the question of whether these practices are constitutional under the First Amendment. To briefly summarize the case, Prager University (PragerU), a conservative nonprofit, sued YouTube for restricting some of its videos under Restricted Mode — a setting that allows YouTube to screen out potentially mature content on behalf of libraries, schools, public institutions, and other users. PragerU also alleged that YouTube demonetized some of its videos, meaning the videos were removed from the pool of content eligible to carry advertising.

    PragerU claimed that these actions violated its First Amendment rights because YouTube was performing a “traditional and exclusive” government function by hosting a forum for public speech. The court found this argument unconvincing. Relying on the Supreme Court’s ruling in Manhattan Community Access Corp. v. Halleck, in which the Court held that the private operator of a public access cable channel was not a “state actor” for First Amendment purposes, the Ninth Circuit held that the First Amendment’s state action doctrine precluded constitutional scrutiny of YouTube’s content moderation practices. In other words, YouTube is not a government actor bound by the First Amendment in this context.

    The U.S. Court of Appeals for the District of Columbia Circuit came to a similar conclusion in Freedom Watch v. Google, finding that the First Amendment “prohibits only governmental abridgement of the freedom of speech.”

    The implications of these cases are clear. Private entities like Facebook, Google, and Twitter cannot be sued for First Amendment violations because merely hosting speech does not transform a platform into a government actor. It follows that the particular content moderation practices at issue (labeling content, fact-checking, and demonetization) do not amount to censorship as contemplated by the First Amendment; instead, they are expression protected from governmental intrusion.

    Working in tandem with the First Amendment is Section 230 of the Communications Decency Act, which allows platforms to moderate third-party content. (For a full history of Section 230, how the law works, and what the law means, see the Section 230 series on our blog.) The relevant provision of Section 230 protects platforms that voluntarily act in good faith to restrict access to objectionable material. It is the spirit of free expression that underlies this law, and it should not be weaponized in a way that forces these private companies to give a platform to Holocaust deniers, conspiracy theorists, anti-vaxxers, and individuals pushing blatant disinformation.

    This is not to say that platforms’ voluntary content moderation policies and practices are perfect — most are far from it. However, numerous studies have shown that conservatives are not the group most harmed by poor content moderation practices. In fact, right-leaning pages on Facebook frequently outperform moderate and left-leaning pages in terms of engagement. The reality is that marginalized communities are disproportionately harmed by seemingly neutral content moderation policies. Even so, poor content moderation practices that do not suppress or prohibit speech are not akin to censorship, and bad-faith claims that they are only degrade the discourse surrounding the issue.

    Public Knowledge believes the best solution is content-neutral policies that promote greater transparency, accountability, and due process in content moderation practices, and that empower users to choose among these dominant platforms and smaller alternatives. This includes bipartisan, pro-competition ideas like promoting interoperability between platforms, as well as market incentives like a “superfund for the internet” designed to counter misinformation and uplift local journalistic organizations and information analysts.

    Tuesday’s hearing should be an opportunity to discuss real ways these platforms can improve their content moderation practices and community standards, and to promote greater user choice through competition policy so that users who dislike a platform’s community standards have somewhere else to go. Instead, it will likely be used as another opportunity to cry “censorship” and browbeat Mark Zuckerberg and Jack Dorsey into allowing misinformation to spread uncontrollably on their platforms. Users of these platforms deserve far better.

    Image credit: Online Speech by ProSymbols from the Noun Project