Section 230 is, as they say, coming around again. Like clockwork, pique against the dominant digital platforms reaches a crescendo, based on the latest anecdote, constituent letter, political change-over, or whistleblower report. In response, policymakers call hearings, issue angry social media posts, and threaten to reform or repeal Section 230, the industry’s coveted liability shield.
Over the past few weeks, in conjunction with a hearing on children’s online safety, a bipartisan group of Senators introduced a bill that would sunset Section 230 in two years. We have also heard that Senators plan to soon reintroduce the STOP CSAM Act, and potentially the EARN IT Act, two bills calling for Section 230 carve-outs that Public Knowledge assessed negatively in the 118th Congress. Proposals like these are generally rooted in a sincere desire to encourage platforms to more effectively moderate harmful or exploitative content. But they have highly undesirable unintended consequences. In some cases, they would make it untenable for platform users to express themselves freely online, as well as for platforms to host that expression.
Recent threats to Section 230 have also come from the Federal Communications Commission (FCC), but for entirely different reasons. Incoming Chair Brendan Carr, as part of a broader campaign to get platforms to do less to moderate content, said in his chapter of Project 2025 that the FCC should “interpret” Section 230 in ways that narrow its protections. Specifically, Chair Carr expressed the view that the FCC should determine whether platform content moderation actions are done in “good faith,” as a condition of the platforms’ liability shield. (During his first administration, President Trump issued an executive order that requested, among other things, that the FCC “expeditiously” propose regulations to “clarify” Section 230 in several respects. The National Telecommunications and Information Administration filed a rulemaking petition, which Carr also supported, in response to the executive order.) Sure enough, recent media reports have indicated the FCC may be considering an advisory opinion on Section 230, notwithstanding the agency’s utter lack of power to authoritatively interpret or modify the law. (Recent Supreme Court decisions – both Loper Bright and West Virginia v. EPA – have narrowed whatever authority agencies might have had to interpret law without Congressional direction.)
For more information: Readers can access the NTIA rulemaking petition from the first Trump administration, as well as Public Knowledge’s assessment of the executive order and of the FCC’s authority under Section 230.
Public Knowledge agrees, of course, that the losses communities, families, parents, and individuals have experienced as a result of digital platforms’ circulation of harmful content are heartbreaking. But repeal of Section 230 – which does as much to protect users’ free expression online as it does to protect platforms from legal liability – is not the answer. In this post, we review why Section 230 is so important to protecting users’ free expression online, and offer two opportunities for meaningful reform.
A Quick Refresher on Section 230
Section 230 of the Communications Act of 1934, 47 U.S.C. § 230, provides immunity from liability as a speaker or publisher for providers and users of “interactive computer services” that host and moderate information provided by third-party users. It’s best known – and reviled – for insulating dominant platforms like Facebook, X, and YouTube from lawsuits. But it also applies to newspapers with comment sections, business review sites like Yelp, and every other online service or website that accepts material from users. And it applies to users themselves, which is why you can’t be sued for others’ content you repost on social media.
But there are other important ways in which Section 230 protects users’ free expression online. Section 230 both encourages platforms to moderate content according to their own terms of service and community standards, and discourages over-moderation of user speech. The first of these – encouraging content moderation – ensures that users are not drowned out or silenced by online harassment, hate speech, and false information. The second – discouraging over-moderation of online speech – may seem counterintuitive. But this effect is rooted in a painful truth: digital platforms will always act in their own financial interests. If there is legal, political, reputational, or any other kind of risk associated with a particular kind of content, they will moderate it aggressively. Research consistently shows that content from communities of color, women, LGBTQ+ communities, and religious minorities is the first to be removed, downranked, or demonetized.
For more information: There are many misunderstandings about Section 230, ranging from false distinctions between “platforms” and “publishers” to the question of whether Section 230’s liability shield depends on “good faith,” ideologically neutral, or other forms of moderation. For more about these, we refer readers to this blog post.
Because of its critical role in protecting free expression online, any proposal for reforming Section 230 must be extremely thoughtful, focus on tailored solutions that target a specific harm, and minimize unintended consequences (like over-moderation, or the tacit elimination of encryption, which facilitates private communication for users). It’s not an easy task: as late-night comedian John Oliver has noted, “I have yet to see a proposal [for Section 230 reform] that couldn’t easily be weaponized to enable political censorship.” Many of the proposals we have seen also introduce the potential for variable enforcement, with judicial decisions whipsawing depending on who appointed the judge and whether the platform in question is X or Bluesky. And in almost every case, the proposals carry unacceptable unintended consequences.
For more information: Public Knowledge has published a set of principles for lawmakers and others interested in developing or evaluating proposals to alter Section 230. We have also created a scorecard designed to assess specific legislative proposals against those principles (117th Congressional scorecard here, 118th Congressional scorecard here). The scorecards highlight that the problems Section 230 reform proposals seek to address are often really competition-related, privacy-related, or rooted in other concerns.
We have two proposals for Section 230 reform we believe can pass our own tests:
- Remove the platforms’ liability shield for paid advertising content, and
- Remove the shield for product design features that are neither third-party content nor the platform’s own expressive speech.
Since no writing on technology policy these days is complete without a reference to artificial intelligence (AI), we also offer a note to clarify that Section 230 does not extend to the outputs of generative AI.
Remove Section 230’s Liability Shield for Paid Advertising Content
Again, we strongly support the free expression benefits users gain from Section 230. But paid commercial ads are not the same thing as users’ free expression. Advertisements receive a lesser standard of First Amendment scrutiny as commercial speech, and they already face more restrictions and regulations regardless of the media channel in which they are delivered.
Ads are the result of a business relationship; platforms choose to carry this content and profit from doing so. Ads are generally disclosed, labeled, or published in a consistent placement or format that distinguishes them from other content (though we would not oppose stronger disclosure requirements). And every major platform already subjects paid ads to some kind of screening process. Removing Section 230’s liability shield from a category of content where there is a business relationship and a clear opportunity to review content prior to publication would give platforms an incentive to review this content more vigorously and reduce harms.
For more elaboration on the removal of Section 230 liability protections for paid ads, including a review of the trade-offs it creates, see this article.
Clarify That Section 230’s Liability Shield Does Not Cover Product Design That Is Neither Third-Party Content Nor the Platform’s Own Speech
Awareness of the dominant platforms’ advertising-based business model and its potential for creating harm (like compulsive use, unhealthy behaviors, social isolation, and unsafe connections) has grown over the past 10 or so years. An advertising-based business model demands that platforms distribute content using algorithms optimized for their ability to sustain users’ attention – attention that is sold to advertisers in the form of advertising inventory. But plaintiff after plaintiff claiming harm from algorithmic distribution of content has correctly seen their case dismissed – or ultimately lost – on one of two grounds. The first is the Section 230 liability shield. The second is the First Amendment, which gives platforms their own expressive rights in content moderation. Bills in Congress designed to influence or regulate platform content moderation have also failed – as they should – because they conflict with Section 230 or with the platforms’ own expressive rights under the First Amendment.
However, more recent court cases have begun to differentiate between algorithmic curation of user content (which enjoys the protections of Section 230 and the First Amendment) and product design features that are rooted in the platforms’ ad-based business model. In these cases (over 200 of them), judges have sometimes denied the usual motions to dismiss based on Section 230, finding that the alleged product design defects do not impose any duty to monitor, alter, or prevent the publication of third-party content. These judges are finding that certain product design features are content-neutral; that is, they can create harm to users without regard to the type or nature of the content being distributed. This introduces the possibility of platform liability for product design driven by the platforms’ business model.

So far, however, judges have not always agreed on which design features meet these criteria. In 2021, in Lemmon v. Snap, the Ninth Circuit determined that it was one of Snapchat’s own product features – a speed filter – and not user content that had created harm. More recently, in one multi-district litigation, the judge identified a range of “non-expressive and intentional design choices to foster compulsive use” and declined to dismiss several of the plaintiffs’ personal injury claims on Section 230 or First Amendment grounds. These design choices include ineffective parental controls, ineffective parental notifications, barriers that make it more difficult for users to delete or deactivate their accounts than to create them, and various types of filters designed by the platform. Other judges have pointed to autoplay, seamless pagination, and notifications on minors’ accounts (Utah); and to a variable reward schedule, alerts, infinite scroll, ephemeral content, and reels in an infinite stream (Washington, DC).
We agree that some product design features are rooted in the platforms’ ad-based business model, are not expressive on the part of the platform or its users, and should be a basis for product liability on the part of platforms. Targeted reform of Section 230 by Congress could establish which types of product design lie outside the protections of the intermediary liability shield because they do not entail content moderation or curation. We acknowledge that, in theory, conduct apart from hosting and moderating content is already outside the scope of Section 230. But the sheer number of current court cases listing different product features, and the conflicting decisions arising from them, show the benefit of reforming the current law. Importantly, such reform would not itself create liability for any element of product design; it would simply allow the case to be made in court.
Targeted reform of Section 230 to advance the theory of product liability for design features could be accomplished while adhering to Public Knowledge’s principles for free expression. One principle states that Section 230 already does not shield business activities from sensible business regulation. Another is that Section 230 was designed to protect user speech, not advertising-based business models (which most of these product design features are meant to advance). A third states that Section 230 reform should focus on the platforms’ own conduct, not user content. Great care would have to be taken to distinguish the platforms’ business conduct from their own expressive speech (as well as the speech of users). But if done well, this would be a content-neutral approach to Section 230 reform that should withstand First Amendment scrutiny.
For more information on the platforms’ ad-based business model and the product liability theory, see Part III: Safeguarding Users of Public Knowledge’s “Policy Primer on Free Expression and Content Moderation.”
A Note on Section 230 and Generative AI Outputs
Recent court cases and legislative proposals have raised the question of whether Section 230 applies to generative artificial intelligence, including large language models. The authors of Section 230, Senator Ron Wyden and former Representative Chris Cox, maintain that it does not. However, the potential ambiguity of the question has led to at least one legislative proposal to make that answer definitive.
In our view, Section 230 does not apply to ordinary generative AI outputs. (Of course, our view may differ if a user directly prompts an AI to produce specific content.) AI is a tool, not a third-party user. And generative AI models don’t merely publish, or republish, content from other sources: the third-party content they use for training is transformed by the model. In many cases, AI firms also apply filters or alter their models’ outputs to keep users from violating the firms’ own rules. That means they are further shaping the content that users see as outputs. So, in the words of the statute, an AI developer is at a minimum “responsible … in part” for the output of the systems it creates. This is enough to take it outside the scope of Section 230.
And Section 230 shouldn’t, in our view, apply to generative artificial intelligence while AI firms race each other to market without exhaustively assessing and mitigating their products’ potential risks. This is particularly the case as a new presidential administration clearly favors acceleration of AI development over safety or security. Until or unless there is a governance framework to oversee and regulate AI technology, the courts will be a necessary check on dangerous products.
We’re sensitive to the idea that large companies can better bear litigation costs than small ones, and that liability shields can promote competition. Smaller firms and new entrants may struggle to bear the cost of compliance and litigation. However, litigation costs alone are not likely to be the decisive factor in market entry for AI start-ups or new AI firms. This industry sector already requires massive investments in training data, energy, and computing power, even for open models; these pose other, larger barriers to entry.
For a more exhaustive discussion of why Section 230 does not, and should not, apply to generative AI, see this previous Public Knowledge post.
Look to Solutions Other Than Section 230 Reform to Introduce Platform Accountability for Content Moderation
Even if adopted, these options for targeted reform and clarification of Section 230 will not be sufficient to fully address free expression and content moderation on platforms. We advocate a range of other solutions related to platform content moderation that would prevent or mitigate user harm. These include requirements for algorithmic transparency and due process, user empowerment, other product design standards, and competition policy to improve user choice. We have also supported legislation that calls for government agencies to study the health impacts of social media and translate those findings into evidence-based policy. Lastly, we favor a dedicated digital regulator whose scope would include oversight and auditing of the algorithms that drive people to specific content.
For a review of other policy solutions related to content moderation, read Public Knowledge’s “Policy Primer for Free Expression and Content Moderation” here. For more information on how to design a digital regulator to rein in Big Tech, see this recent paper.