Every year Congress rediscovers Section 230 and decides it’s the reason children are unsafe online. The frustration is real. Platforms have spent years designing products that predictably expose kids to harm. But the proposed solution is always the same: repeal or gut Section 230 and hope the internet becomes safer through liability alone.
That approach won’t work. Repealing Section 230 would not address the design-driven harms lawmakers are rightly worried about. Instead, it would destabilize the legal framework that allows platforms to moderate at all, while leaving the most dangerous platform design choices largely untouched.
Section 230 is often framed as a sweeping immunity that shields platforms from accountability. In reality, it does something much narrower. It prevents online services from being held liable as the publisher or speaker of user-generated content, while allowing them to remove or restrict that content without incurring liability for trying.
This is an important distinction, because dismantling Section 230 will not fix the problems lawmakers actually want to solve. Repeal Section 230, and platforms are forced into one of two bad choices.
Option One: Over-Moderation Backed by Constant Surveillance
One likely response to Section 230 repeal is aggressive, risk-averse moderation. Faced with the threat of lawsuits, platforms would remove anything that could plausibly trigger claims, not because it is harmful, but because it is safer to delete first and ask questions later. That kind of moderation doesn’t require careful judgment. It relies on automated filters and broad content bans that inevitably sweep in lawful, valuable speech. As a result, speech about mental health, sexuality, gender identity, abuse, or sexual health education, the very conversations where young people often seek support, is among the first to disappear.
And enforcing these systems requires monitoring. To minimize risk, platforms would be incentivized to scan more content, retain more data, and document user behavior to prove compliance. Private messages become less private. Youth activity becomes more closely tracked.
This does not happen in a vacuum. Platforms already operate surveillance-based business models built on data extraction and engagement tracking. Weakening Section 230 would layer legal incentives on top of those economic incentives, accelerating the normalization of pervasive monitoring in the name of safety. That is not protection. It is institutionalized surveillance.
Option Two: No Content Controls at All
The second option is the opposite, and it is just as harmful. Without Section 230, moderation itself becomes evidence of responsibility and knowledge. The more a platform intervenes, the easier it is to argue that it should have done more. In that legal environment, the safest strategy may be to do as little as possible and claim to be a neutral conduit.
That means fewer rules, fewer interventions, and fewer safeguards. Harassment, grooming, exploitation, and abuse become harder to address because the platform has deliberately stepped back from involvement. For kids, that creates online spaces with little structure and even less accountability.
Repealing Section 230 doesn’t force platforms to moderate better. It can just as easily encourage them to stop trying.
The Real Problem Is Not Content. It Is Design.
Lawmakers’ frustration with platforms is not really about whether they are deleting the right posts. It is about the fact that many platforms are intentionally designed in ways that predictably harm children.
Those harms include features that facilitate exploitative contact with strangers, recommendation systems that amplify risky behavior, endless feeds engineered to maximize engagement, dark patterns that discourage disengagement, and surveillance-based advertising models that monetize youth attention.
These are product design choices, not third-party speech. And critically, Section 230 generally does not shield platforms from liability for harms caused by their own designs. The claim that Section 230 categorically prevents accountability for harms to children is increasingly inconsistent with how courts are actually applying the law.
In Lemmon v. Snap, the Ninth Circuit made that clear. The case did not turn on user-generated content at all. Instead, the court focused on Snapchat’s own product design, specifically the Speed Filter feature, which allegedly incentivized reckless driving. The plaintiffs’ claims sounded in traditional product liability and negligence rather than publication or editorial decisions. Because the alleged harm flowed from Snapchat’s design choices rather than third-party speech, Section 230 did not apply.
Lemmon confirms that Section 230 does not bar claims where the duty arises from a platform’s own conduct, the alleged defect is a product feature or design decision, and liability does not require treating the platform as the publisher or speaker of user content. Courts are now applying that same logic on a much broader scale.
In the ongoing multidistrict litigation over social media addiction, as well as a parallel case in Los Angeles Superior Court, judges have carefully parsed complaints to separate claims based on third-party content, which Section 230 may shield, from claims based on platform architecture, algorithms, and engagement-driven design, which it does not.
As a result, substantial portions of these cases have survived motions to dismiss. Claims alleging defective design, failure to warn, negligence, and unfair competition are moving forward because they target platforms’ own affirmative choices, including infinite scroll and engagement optimization, rather than user speech.
This reflects a more disciplined and increasingly common judicial approach to Section 230. Courts are preserving protection against publisher liability for user expression while leaving platforms exposed to liability for the foreseeable harms of their products.
Repealing Section 230 would not strengthen these avenues of accountability. It would destabilize them, collapsing the careful distinction courts are drawing between speech and product and replacing it with legal uncertainty that incentivizes either over-censorship or disengagement from safety efforts. That said, Section 230 can certainly be updated to be more responsive to how online platforms have evolved over the last thirty years. In particular, we believe Congress has an opportunity to clarify which design features fall outside the intermediary liability shield, giving victims of platform harm a clearer path to liability claims in court.
Repeal Is a Distraction From Real Solutions
Repealing Section 230 confuses two different problems: failures of content moderation and harms caused by exploitative product design. Gutting the law will not fix the latter, and it risks destroying the legal space that makes the former possible.
If lawmakers want to protect children online, there are tools that address these harms directly: strong privacy protections that limit data collection and targeted advertising; restrictions on manipulative and addictive design features; age-appropriate design requirements grounded in child development; and enforcement mechanisms that do not rely on monitoring kids’ speech. As we outline in our recent paper, these approaches regulate incentives and architecture.
Section 230 is an easy scapegoat for complex failures. But children’s safety online is not a loophole problem. It is a design problem, a business model problem, and a regulatory willpower problem.
Repeal Section 230, and platforms will not suddenly become responsible stewards. They will become more afraid. And fear produces bad systems: blanket censorship, constant monitoring, or total abdication of responsibility.
The law already allows us to hold platforms accountable for harmful design. If Congress is serious about protecting kids online, it should stop trying to break the internet’s speech infrastructure and start regulating the systems that actually cause harm.