Introduction: A New Vision for Free Expression and Content Moderation
A lot has changed since Public Knowledge began a dialogue almost seven years ago about the role of dominant digital platforms in public discourse. Our earliest analysis (like that of many other civil society groups) focused on the concern that platforms would moderate user content too much, or without due process for users. At that time, almost daily news reports recounted how content moderation decisions – such as disabling user accounts or removing or “de-monetizing” posted content – left users at a disadvantage. Users found themselves with no recourse and no alternatives because many platform markets were not (and are still not) competitive. While we shared civil society concerns about the hate speech and harmful rhetoric already swirling on platforms, we focused our own analysis on gatekeeper power. The “Santa Clara Principles,” unveiled in 2018, constituted one of the first comprehensive frameworks proposed by a coalition of civil society organizations to create accountability for internet platforms’ content moderation. They, too, focused on ensuring user rights and called for a highly restricted role for the government in shaping platforms’ content moderation approaches.
With this new four-part blog series, Public Knowledge unveils a vision for free expression and content moderation in the contemporary media landscape. Our goal is to review important changes in the media, technological, political, and legal landscape – including, most recently, a significant political backlash to the study of disinformation, and the first few Supreme Court cases addressing the role of government in platform content moderation – and describe how policymakers should think about content moderation today. Most importantly, we will frame the appropriate policy interventions to ensure the right balance between free expression and content moderation while guaranteeing citizens the information they need to enable civic participation. We will focus on social media and entertainment platforms that distribute user-generated content, search, and non-encrypted messaging channels, all of which rely on algorithmic curation.
In this post, Part I: Centering Public Interest Values, we provide a brief historical perspective on platform content moderation, review the values that Public Knowledge brings to this topic, and discuss the importance of rooting content moderation approaches and policies in user rights. We also consider our first theory related to content moderation: that user rights should include the right to hold platforms liable if they don’t enforce the community standards and/or product features they contract for in their terms of service.
In Part II: Empowering User Choice, we discuss the structure of digital platform markets and the necessity of policy choices that create healthy competition and user choice. We also center digital platforms in the broader ecosystem of news and information, and discuss how policy interventions may offset the impact of poor platform content moderation on the information environment by promoting other, diverse sources of credible news.
In Part III: Safeguarding Users, we discuss an additional array of policy interventions designed to bring about content moderation that respects the imperative for free expression and is in the public interest. These include a product liability theory, limiting data collection and exploitation, and requirements for algorithmic transparency and choice.
In Part IV: Tackling AI and Executing the Vision, we discuss the implications of the new “elephant in the content moderation room,” generative artificial intelligence, for free expression and content moderation. We also discuss how our recommended policy interventions can be made durable and sustainable, while fostering entrepreneurship and innovation, through a dedicated digital regulator.
Readers will note that in the interest of brevity and clarity, we have chosen not to describe or link any of the hundreds of thousands of incidents and articles available to us to highlight the failures of digital platforms to effectively moderate violative content on their sites, including hate speech, harassment, extremism, disinformation, non-consensual intimate imagery, child sexual abuse material, and other toxic content. We assume any reader engaging with a series such as this will already be familiar with this context and bought into the imperative to forge policy solutions that protect the benefits of digital information distribution platforms while mitigating their harms. Readers who are looking for more information about content moderation can visit our issue page, read more about the harms associated with algorithmic curation of content, and explore why multiple policy solutions will be required to ensure free expression and effective content moderation.
A Very Brief Historical Perspective on Content Moderation
In the early, utopian days of the Open Internet, informal norms and social codes were sufficient to maintain civility in online communities. The earliest computer networks connecting Department of Defense researchers and universities served a highly homogeneous population with similar values. These users could view and experience the online landscape as open, decentralized, democratic, and egalitarian. The preference was for moderation by the community itself, using collaboratively developed norms instead of centralized rules. This self-moderation approach was relatively easy to accomplish when users drawn from the same or similar groups of people largely brought the same lived experiences and worldviews to small and highly specific online forums. But the harmonious homogeneity was short-lived: The introduction of the Mosaic and then Netscape Navigator “browsers” (among other technical developments) brought new audiences, non-institutional computers, and new perspectives onto the internet in droves. By 1995, Netscape Navigator had about 10 million global users.
After several conflicting judicial outcomes about companies’ liability for third-party content on their services, Congress passed a new law: Section 230. (It was originally part of a far broader piece of legislation focused on the distribution of pornography, the Communications Decency Act of 1996, most of which was struck down.) Section 230 is one of the most consequential – and misunderstood – provisions governing the internet. It shields online services from liability when managing third-party content on their platforms. By doing so, Section 230 allows users to express themselves freely without the threat of over-moderation by online services seeking to reduce their own legal liability.
As internet access widened, it meant that a much larger cross-section of people could be in the same dialogue – and they brought with them different lived experiences, values, and views. Online forums were seen by young activists as the antidote to corporate consolidation in media, increasing suppression of the social justice and anti-war movements, and other political forces. There was an explosion of creativity – especially among communities of color marginalized by established media channels – and the Open Internet’s potential for aiding affinity groups and creators to connect, mobilize, and innovate became real.
But the democratic promise of the Open Internet also soon came to be compromised by vitriol, harassment, hate speech, and other forms of online abuse, requiring new forms of content management in order to maintain civility in online communities. Despite their extraordinary contributions to the creation of the internet, its supporting technology, and the connected networks that brought it to life, Black and women users were subject to some of the most violent abuse. A new form of online service provider – platforms – centralized content moderation and, in their quest for scale, sought to make it more efficient. The platforms rising to dominance, most notably Google and Facebook, adopted an advertising-based business model, which encouraged distribution of content based largely on its profit potential. Bad actors rushed in to exploit the economics of provocative and extreme content.
Content moderation took on new urgency in the 2010s with the growth of social media and the speed and ubiquity of the mobile web. “Gamergate” (in 2014) demonstrated how focused online communities could orchestrate devastating harassment campaigns, while “Pizzagate” (in 2016) revealed the destructive power of online conspiracy theories. Thanks to the work of researchers and journalists, we learned more about the people, rules, and processes that made up the systems of governance of the dominant platforms – “the new governors” of online speech – and “Trust and Safety” became a legitimate career path. Infinite scrolling, notifications, an explosion in video content enabled by 4G networks, and other aspects of the mobile web compounded the ease, scale, and velocity associated with the sharing of content. We learned through the Cambridge Analytica scandal how our personal data could be “harvested” without our informed consent and used for highly targeted distribution of political (and other) ads. Ultimately, thanks to the COVID-19 pandemic and the 2020 election, we also learned about the horrifying real-world harms that could come from platforms’ failure to manage strains of misinformation and disinformation effectively. During this same time period, both political and social polarization in the United States increased dramatically.
Through it all, different stakeholders criticized the new speech governors’ efforts as being too much. Or too little. Naive. Or corrupt. Politicized. Or indifferent. In an attempt to avoid scrutiny, platforms evolved and re-evolved their content moderation policies, experimented with community-centered approaches, and funded initiatives to make content moderation more independent. A dangerous new counter-narrative put forward by those who use disinformation as a potent political tool led to hearings and court cases claiming any government efforts to collaborate with platforms in the interest of national security and public health were “censorship.” Some of these challenges have reached the level of the Supreme Court (where the claims were rejected). Academic institutions and civil society organizations focused on understanding and mitigating disinformation narratives faced expensive lawsuits and lost a lot of their funding and their talent. All the while, Americans were losing news organizations that use ethical professional techniques to source, verify, and correct their content. Now, citizens are losing faith in not only the free press but also many democratic institutions.
And despite hearing after hearing and wave after wave of legislative proposals in Congress – to ostensibly reform the industry’s Section 230 liability shield, regulate algorithms, protect privacy, ensure election integrity, “rein in Big Tech,” and save the children from harm – with one exception, Congress has not passed a single material law regarding platform liability. (That exception, SESTA-FOSTA, the combined package of the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act that passed Congress in early 2018, is a case study in unintended consequences and demonstrates the need for a nuanced approach to platform regulation.)
Bringing Public Interest Values to Free Expression and Content Moderation
One thing that hasn’t changed since Public Knowledge’s first analysis in this space is the set of core values that we bring to the discussion. Since our founding over 20 years ago, whether the topic is intellectual property or telecommunications or the internet, our bedrock has been the value of free expression, including individual control and dignity. But we also value safety, both for individual communities online and for the conversation itself, including by ensuring privacy through technologies like encryption. We also bring a core value of equity – that is, in a pluralistic society with diverse voices, how do we ensure equitable access to the benefits of technology, including the chance to speak? We advocate for marketplace competition, which helps promote consumer choice of avenues for expression. And we explicitly seek to support, not undermine, democratic institutions and systems.
Such are the values that must be balanced to create content moderation in the public interest. If anything, these ideals have become more important at a time when democratic backsliding is happening in the United States and around the world. From the beginning of the American experiment, civic information and the ability to both express and hear differing views have been among the pillars of democracy, and both free speech and a free press have been protected rights. But both expressing and hearing diverse or differing viewpoints require civility. We believe the government has an affirmative responsibility to promote an environment that allows this civility, and to further a competitive marketplace that encourages a diversity of views. Unmoderated harassment and hate speech deter the speech rights of some, and the greatest impact invariably falls on already marginalized communities. These online scourges are also incompatible with the principles of a multi-racial democracy, civil rights, and social justice. Simply put, we’ve learned that free expression for all requires content moderation.
But better content moderation is also about capitalism and free markets. Unmoderated platforms may serve a specific demand among a subset of internet users, but they can also lack commercial value, as we have recently witnessed in the reduced ad dollars funding X, formerly Twitter. Content moderation standards have the potential to be the platforms’ principal means of competitive differentiation, especially if pro-competition policies like interoperability – which we favor – diminish the importance of network size. And even an “unmoderated” platform is not politically neutral in its impact (we’re looking at you, X).
Rooting Content Moderation Policy in User Rights
As we’ve noted, Public Knowledge’s earliest analysis of platform content moderation focused on ensuring user rights, specifically the concern that platforms would moderate content without due process for users. Our perspective was – and remains – rooted in the most basic of constitutional rights, including those found in the First, Fifth, and Fourteenth Amendments. Today, users’ rights on platforms are bounded by the platforms’ terms of service, which represent contracts between users and the platforms. However, these terms of service are mostly designed to give the platforms expansive rights, including the right to use all posted or shared content without being liable to the user, and to collect, use, and potentially share extensive user data. They also generally require users to use an arbitration process to resolve disputes. At a bare minimum, users should be able to understand the terms of these agreements, understand what they imply in terms of online experience, and expect platforms to enforce them consistently, including providing due process rights for action on content.
A Consumer Protection Theory of Content Moderation
One theory rooted in user rights goes further, holding that platforms should be held liable for defrauding users if they don’t consistently enforce the community standards and/or product features they contract for in their terms of service. Infringements would include failing to moderate content that violates the platform’s stated community standards, or not enforcing product features such as parental controls. This consumer protection theory calls upon the Federal Trade Commission and other consumer protection regulators to enforce the contracts the platforms already have with their users. Under its Section 5 authority, the FTC could sue companies that defraud users by violating their own terms of service contracts. (The FTC has already sued Facebook for violating the privacy promises it makes in its terms of service.) The FTC could also use its rulemaking authority to define how platforms must spell out and enforce their terms of service.
Users are beginning to pursue these rights in the courts. Recently the Ninth U.S. Circuit Court of Appeals accepted an argument that YOLO, a Snapchat-integrated app (since banned on the platform) that let users send anonymous messages, misrepresented its terms of service. The panel “held that the claims [of the plaintiffs, the family of a teen boy who died by suicide after threats and harassment on Snapchat] survived because plaintiffs seek to hold YOLO accountable for its promise to unmask or ban users who violated the terms of service, and not for a failure to take certain moderation actions” (which would have been protected by Section 230). (Conversely, the panel rejected the plaintiffs’ argument that YOLO’s anonymous messaging capability was inherently dangerous, under a product liability theory we discuss in Part III: Safeguarding Users.)
Although Public Knowledge is generally supportive of the consumer protection theory, we recognize it has some pitfalls. For example, users or government officials could try to hold a platform accountable because they disagree with how the platform has interpreted or applied its terms of service. However elaborate or detailed the platform’s rules may be, a term like “hate speech” is subject to interpretation. Content moderation is inherently subjective, and the consumer protection theory could be misapplied. But at the same time, in our vision, user rights in regard to free expression and content moderation should extend beyond simply understanding what the platforms can do with users’ content (and data), and expecting the platforms to explain and comply with their own rules. For example, we would favor rights for users to file individual appeals to the platforms to challenge their content moderation decisions.
Despite our emphasis on user rights, we do not believe that users have a right to publish on any particular private platform, nor do they have a right to be amplified algorithmically. (As Aza Raskin of the Center for Humane Technology first noted, “freedom of speech is not freedom of reach.”) In fact, platforms have their own expressive rights that are reflected in the communities they create through content moderation. They have the legal capacity to determine what is and is not allowed on their feeds and establish guidelines for acceptable posts via their terms of service. Users who abuse platforms in defiance of their community standards should, of course, face consequences, including being cut off from the platform when appropriate. But they should also know why they are being cut off, and they are still owed due process.
Learn more about empowering user choice in Part II.