We are nearing the 30th anniversary of the Communications Decency Act of 1996, home to Section 230, the infamous internet liability shield that opponents believe is the major roadblock to making platforms a safer, healthier space for free expression. Responding to incessant calls to hold social media platforms accountable for harm, a bipartisan, bicameral group of lawmakers has championed repealing or reforming Section 230. The most recent charge is driven largely by kids’ online safety debates and is being led in the Senate by Senators Lindsey Graham (R-SC) and Dick Durbin (D-IL), with support from a bipartisan group of eight other lawmakers. In the House, Representative Harriet Hageman (R-WY) introduced her own Section 230 sunset bill to tackle what she perceives as “liberal Silicon Valley bias” in platform content moderation. Most recently, Representative Jimmy Patronis (R-FL) introduced the Promoting Responsible Online Technology and Ensuring Consumer Trust (PROTECT) Act, legislation that would repeal Section 230 from the angle of holding platforms accountable for harm to children.
To counterbalance the deluge of anti-Section 230 rhetoric from our lawmakers and online safety advocates alike, we find it important to reiterate the positive impacts of Section 230 (and no, they are not limited to the liability protections Big Tech platforms enjoy). You can read about the legal nitty-gritty of what Section 230 is and does from Public Knowledge’s legal director John Bergmayer. But as a brief refresher: Section 230 ensures that online platforms are not held liable for the third-party content they host, including illegal or defamatory speech, and it protects platforms when they moderate user content. Section 230 does not remove users’ responsibility for their own content. If someone posts illegal content like child sexual abuse material, that user can still be held liable for producing and distributing it.
To understand how Section 230 protects users’ expression, it’s important to understand that platforms, like most businesses accountable to shareholders, will always act in their own financial self-interest. If, through Section 230 repeal, platforms were made liable for the third-party speech they host, they could become hyper-cautious about hosting anything controversial, especially content criticizing powerful people with expensive lawyers and expansive notions of what constitutes defamation. Alternatively, platforms might abandon content moderation altogether and claim ignorance of any defamatory, false, or otherwise problematic content on their sites. Under First Amendment precedent, a platform can only be held liable for falsehoods it hosts if it had intent or knowledge of wrongdoing, creating a perverse incentive to avoid learning what users are posting. Large platforms like Meta know this, and would in fact benefit from a regulatory environment that allows them to spend far less on content moderation while weathering the liability lawsuits over user content that smaller firms cannot afford. A win-win for Meta: less incentive to moderate content, combined with fewer potential competitors that could chip away at the social media behemoth’s market dominance.
In the end, it’s not Big Tech that suffers from a Section 230 repeal – it’s you. Here’s what Section 230 actually protects, and who would be impacted the most by its repeal.
Your Local News Website and Journalism Depend on It
While lawmakers rail against Big Tech, they rarely mention that their hometown newspapers rely on the same protections. Local news outlets across the country maintain online comment sections, community forums, and user-submitted content that would become legal minefields without Section 230. Regional papers and even independent journalists on Substack use reader engagement channels like these to build community and drive subscriptions. Without Section 230, these outlets would face a choice: either disable all user interaction, or hire legal teams they can’t afford to review every comment before publication.
Many of the same lawmakers pushing Section 230 reform also claim to champion local journalism and bemoan news deserts in their districts. But weakening Section 230 would force struggling local outlets and independent journalists to choose between community engagement and existential legal risk. When a commenter posts something potentially defamatory in the comments section of a local news story about city council corruption, should the online newspaper be held liable? Section 230 says no.
Small Platforms and New Market Entrants Can’t Succeed Without It
At Public Knowledge, we reiterate that many of the harms caused by social media would be tempered by mitigating the monopolistic control major platforms have over their users and markets. But if there are to be viable alternatives, there must be a regulatory environment that gives new platforms a fighting chance. Section 230’s liability shield provides crucial protection during the cash-strapped startup phase.
Content moderation is an inherently complicated business, and no platform will get it right 100% of the time. We want new market entrants to try different content moderation methods – like Bluesky’s approach of empowering users to choose their own moderation settings, or X’s Community Notes model. And when these new entrants inevitably make a wrong move – a controversial deplatforming decision, say, or letting illegal content slip through the cracks – they should be afforded the ability to right the ship. That way, if a small platform fails, it fails on the merits of its product in the free market rather than on its capacity to weather expensive litigation (as Meta or Google can).
It Insulates Platforms from Political Pressure to Moderate According to Viewpoint
We’ve already witnessed how major platforms bend to accommodate the party in power by revising their policies to align with the administration’s preferences. The transition from Biden to Trump made this dynamic impossible to ignore, with Meta and Google rolling back or revising their hate speech policies. Without Section 230’s liability protections, this tendency toward political accommodation would intensify dramatically. Platforms could face direct liability for user-posted content, making them vulnerable to defamation lawsuits, state attorney general actions under consumer protection laws, and potential FTC enforcement claims that hosting certain content violates platforms’ own stated safety commitments.
For example, right now, with an FTC hostile to the transgender community, Section 230 repeal could mean platforms preemptively removing content about gender-affirming care rather than risk being accused of facilitating harm to consumers. Flipping the scenario, a Democratic White House could do far more than ask Mark Zuckerberg to moderate pandemic-related falsehoods on Facebook. It could signal that hosting such content violates FTC consumer protection standards, and platforms would face a choice between moderating according to the government’s preferred approach and defending themselves in expensive litigation they might ultimately win but can’t afford to fight.
Platform Accountability is Possible Without Section 230 Repeal
There’s a reason Section 230 repeal attracts bipartisan support: the desire for online platform accountability is genuine and widespread, and it drives much of our work at Public Knowledge. But there’s also a reason the opposition is equally bipartisan: anyone who thinks seriously about the mechanics of platform content moderation understands that repeal would restrict everyone’s speech, not just the speech of those posting harmful or illegal content.
Section 230 protections are crucial for online free expression, but this reality does not preclude thoughtful reform. Narrow, targeted changes to Section 230 are possible. Alternative approaches to platform liability exist. Meaningful accountability can be achieved without dismantling the liability framework that makes user-generated speech economically viable online.
In our follow-up analysis, we’ll lay out how platform accountability can work without Section 230 repeal. We’ll categorize the different repeal proposals currently circulating and explain in detail why none of them would deliver the outcomes their sponsors claim to want, particularly in the realm of kids’ online safety. The gap between the stated goals of Section 230 repeal and its actual consequences deserves serious scrutiny, and that’s exactly what we intend to provide.