Assessing Section 230 Reform Proposals in the 119th Congress

Lawmakers cite various reasons for supporting proposals to reform or repeal Section 230. But what effect would these proposals actually have in practice?

Lawmakers have felt pressure from constituents to “stick it to Big Tech,” and many have become convinced that Section 230 is the singular roadblock preventing platform accountability – especially for harm to kids. Several proposals that have circulated in Congress would fundamentally reshape online speech, and they’re often based on misunderstandings about what Section 230 actually does and doesn’t do. Some proposals would strip immunity from platforms that make content moderation choices lawmakers disagree with, under the guise of promoting “viewpoint diversity” or “neutrality.” Others would strip liability protections from platforms that use algorithmic recommendation systems to organize content. This blog examines what these proposals would actually change, why the promised accountability wouldn’t materialize the way proponents claim, and what they’d mean for the users who depend on Section 230 for free expression.

To Prevent Anti-Conservative Bias 

Claims that major social media platforms were systematically silencing conservative viewpoints took root about a decade ago, when Facebook’s Trending Topics feature drew backlash for allegedly favoring left-leaning news sources. Accusations of anti-conservative bias grew more pronounced around controversies like the Hunter Biden laptop story, the deplatforming of President Trump, and the moderation of perceived COVID-19 falsehoods. President Trump himself has repeatedly complained about the alleged censorship of conservative voices online and has called for the repeal of Section 230 in his speeches and social media posts, going as far as issuing an executive order that sought to strip the liability shield from platforms that engage in editorial conduct in their content moderation.

Republican lawmakers who want to force social media platforms to moderate in a “viewpoint neutral” way have pointed to the second prong of Section 230, which protects “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” The last bit, “otherwise objectionable,” gives platforms broad leeway to implement content moderation policies that go beyond objectively illegal content.

Congresswoman Harriet Hageman (R-WY) introduced a bill to revise Section 230, reasoning that, “the current statute does not define what otherwise objectionable means, nor could it because it is not a common standard. What is objectionable to one person might not be to another. In this absence, Big Tech has defined what is objectionable based on its own beliefs […] with a liberal Silicon Valley bias.” Hageman proposes to replace “otherwise objectionable” with simply “unlawful,” so that platforms can still remove content that harms children or facilitates terrorism while leaving everything else untouched.

Rep. Hageman is correct that “otherwise objectionable” is not a common standard. This is intentional. If platforms were obligated to moderate content according to the exact same standard of what counts as objectionable, they would lose a key means of market differentiation. Social media platforms are already obligated to moderate unlawful content. Replacing “otherwise objectionable” with “unlawful” would mean platforms could not moderate the “lawful-but-awful” category of speech without facing liability, which could turn every platform into 4chan and its offshoots – sites infamous for permissive policies that tolerate hordes of white supremacist content and violent threats.

The “otherwise objectionable” language allows platforms to craft content policies that reflect the norms and values of the people who run them and the users they serve. For Elon Musk’s X, that means policies more permissive of harassment and hate speech under the guise of “absolute free speech.” For others, bullying users off a platform with targeted harassment and hate speech is not in the spirit of free expression, which is why platforms like Bluesky emphasize a “welcoming environment” by prohibiting “harassment, bullying, hate speech, or discrimination.”

The most important obstacle to Rep. Hageman and her colleagues’ proposal is that the Supreme Court affirmed in Moody v. NetChoice that curating content in social media feeds is expressive conduct, and therefore First Amendment-protected. If a social media platform decides to show only cat videos and take down anything not feline-related, that platform has a First Amendment right to do so. Lawmakers cannot force that platform to host canine content for the sake of “viewpoint neutrality,” as “the government cannot justify interfering with a private speaker’s editorial choices merely by claiming an interest in improving or balancing the marketplace of ideas.”

To Mitigate Harmful Deepfaked Content

President Trump signed the TAKE IT DOWN Act in May 2025. The law criminalizes the nonconsensual posting of intimate images, including AI-generated deepfakes, and requires platforms to remove such content within 48 hours. Representatives Celeste Maloy (R-UT) and Jake Auchincloss (D-MA) introduced a bill in December 2025 that would build on TAKE IT DOWN by amending Section 230 to condition platforms’ liability protections on meeting a duty of care to prevent cyberstalking and abusive deepfakes.

This bill suffers from the same deficiencies as the TAKE IT DOWN Act – namely, a broad takedown provision that mandates flagged content be removed within 48 hours and requires “reasonable efforts” to identify and remove known copies. The provision lacks safeguards against frivolous or bad-faith requests, risking the silencing of lawful satire, journalism, and political speech. The 48-hour deadline pushes platforms, especially smaller ones, to comply immediately without verifying claims in order to avoid legal liability. And the automated filters used to catch duplicates are notoriously prone to flagging lawful content, from fair-use commentary to news reporting.

Their bill, the Deepfake Liability Act, goes a step further by making Section 230’s liability protections conditional on whether a platform provides a “reasonable process for addressing cyberstalking and intimate privacy protections,” including a process to prevent cyberstalking in the first place. Like most other Section 230 reform ideas, hinging liability protections on platforms’ “reasonable” efforts would encourage platforms to remove any and all potentially violative content. That push to overmoderate means more restrictions on speech and less free expression online.

To Establish Accountability for Algorithms 

Whether social media platforms should be liable for feeding users radicalizing or otherwise harmful content via recommendation algorithms has been considered in several court cases. In Force v. Facebook, plaintiffs alleged that Facebook’s algorithms recommended ISIS content, arguing that algorithmic amplification transformed Facebook from a passive host into an active editorializer. Courts rejected this theory, finding that even targeted recommendations remain covered by Section 230 because they are fundamentally editorial decisions about third-party material.

We wrote about a similar case surrounding the 2022 Buffalo shooting, in which Everytown for Gun Safety sued social media companies, claiming the shooter was radicalized by algorithmic recommendation systems and addictive features that amplified the white supremacist content that led to the mass shooting. Plaintiffs argued that the use of algorithms to organize content is a design feature and therefore outside the scope of the Section 230 liability shield. The courts rejected this theory as well, affirming that Section 230 does, indeed, protect the use of algorithms in organizing content – they are ultimately tools for organizing and moderating third-party user content.

Unfortunately, there have been recent acts of violence by chronically online perpetrators who were found to have immersed themselves in radicalizing content. The current push to hold platforms accountable for the algorithms they deploy was reanimated by the assassination of conservative political figure Charlie Kirk by Tyler Robinson, a young Utah man reportedly drawn into “dark places of the Internet.”

Utah Republican Senator John Curtis and Democratic Senator Mark Kelly of Arizona introduced the Algorithm Accountability Act, which would amend Section 230 to impose a duty of care holding platforms accountable for harms that result from their recommendation algorithms – a bill inspired by Kirk’s assassination and Robinson’s reportedly chronically online existence.

The Algorithm Accountability Act treats algorithmic recommendations as categorically different from editorial judgment when they’re actually the same thing. When platforms use algorithms to organize feeds, suggest connections, or surface trending topics, they’re making choices about what speech to amplify – choices protected by the First Amendment. Stripping Section 230 protection for these choices makes platforms vulnerable to liability for editorial decisions courts have repeatedly recognized as constitutionally protected expression.

Platforms facing potential liability for algorithmic recommendations would respond predictably: they would become far more aggressive about removing content to eliminate any possibility that their systems might surface something later deemed harmful. That over-removal would disproportionately silence controversial but lawful speech, particularly from marginalized communities whose content is already subject to over-moderation.

If Congress genuinely wants algorithmic accountability, the path forward requires identifying specific harms and crafting targeted remedies. Transparency requirements could allow researchers to study how recommendation systems affect information exposure and identify discriminatory patterns. Consumer protection frameworks could address dark patterns and addictive design without regulating speech itself.

Conclusion 

Platforms have never gotten content moderation 100% right, and they never will. A truly unbiased, viewpoint-neutral social media platform is nice in theory but impossible to build in practice. Human moderators carry bias, and so do the algorithms that human engineers tune. Those algorithms will always have to make choices about what to show users first, what to recommend, and how to organize billions of pieces of content. These are editorial choices, whether made by human moderators or by algorithmic systems with human input, and they’re protected by the First Amendment for good reason.

Section 230 doesn’t shield platforms from liability for their own illegal conduct. It doesn’t prevent enforcement of federal criminal law or intellectual property claims. What it does is create breathing room for platforms to moderate content without facing endless litigation over every removal decision, while simultaneously protecting their ability to leave up controversial-but-legal speech without being treated as the publisher of that content.

If Congress wants to address legitimate concerns about platform power, harmful content, or algorithmic amplification, the solution isn’t to dismantle Section 230. It’s to craft interventions that address specific harms without breaking the legal framework that has allowed the internet to flourish as a space for speech, innovation, and connection. This means transparency requirements, consumer protection standards, and, where appropriate, antitrust enforcement.