Plaintiffs in a lawsuit arising from a 2022 mass shooting in Buffalo, New York, are using a new legal strategy to hold social media platforms accountable for causing real-world harm: portraying algorithmic recommendation systems as defective products rather than as the platforms’ protected speech. Everytown Law, a nonprofit focused on gun safety, is suing a broad range of tech companies, including Meta and Discord, claiming these platforms radicalized shooter Payton Gendron through algorithmic recommendation systems and addictive features that amplified white supremacist content. According to the shooter’s own manifesto and a report from the New York Attorney General, Gendron was immersed in extremist memes and conspiracy theories like the “great replacement” and drawn into a community that glorified previous mass shooters.
Rather than arguing that platforms are liable for third-party content (which would typically be blocked by Section 230 of the Communications Decency Act), the plaintiffs are framing the platforms as products – and defective ones at that. They argue:
- The platforms’ algorithms and engagement-driven design are inherently unsafe, fostering addiction and amplifying harmful content.
- These algorithms operate not merely as conduits for speech but as first-party actions, akin to manufacturers selling dangerous physical products.
- Because many platforms hold patents on their algorithms, the plaintiffs claim, the platforms’ status as product manufacturers, not publishers, is reinforced.
The 2022 Buffalo mass shooting was undeniably horrific. We recognize the profound pain caused by online ecosystems that amplify extremism and foster harm, and we believe that platforms must be held accountable when their design choices actively contribute to those outcomes. But we must draw a line between holding platforms accountable for product design and seeking to punish them for third-party speech. And we caution against legal theories that seek to repackage content-based harms as product liability claims. When plaintiffs argue that platforms should be held liable because they recommended or amplified hateful third-party content, they are effectively asking courts to impose publisher liability on these platforms. That is exactly the kind of claim that Section 230 of the Communications Decency Act was designed to block, and for good reason. The statute ensures platforms are not incentivized to remove all potentially controversial content to insulate themselves from liability. Weakening Section 230 would chill legitimate online speech, especially for marginalized communities who would otherwise be subject to overmoderation, and jeopardize the open nature of the internet generally.
Why Not Make Platforms Accountable for the Algorithms?
The plaintiffs’ legal strategy for platform accountability regarding algorithmic recommendations has been influenced by a few key cases, including Force v. Facebook and Gonzalez v. Google, both of which involved claims that platforms’ algorithmic systems helped promote terrorist content, which, in turn, radicalized users into committing acts of violence. In both instances, federal courts pointed to Section 230’s liability shield, viewing algorithmic recommendations as traditional “publisher activities” akin to arranging and distributing third-party content.
Yet dissenting opinions from Chief Judge Katzmann in Force and Judge Berzon in Gonzalez questioned whether algorithmic recommendations should receive such broad immunity. Chief Judge Katzmann argued that Facebook wasn’t merely publishing content but “proactively creating networks of people,” and that its algorithms communicate a specific message to users about content they might like. Katzmann contended that “when a plaintiff brings a claim that is based not on the content of the information shown but rather on the connections Facebook’s algorithms make between individuals, the [Communications Decency Act] does not and should not bar relief.” Judge Berzon likewise argued that algorithmic recommendations are “well outside the scope of traditional publication.” To be clear, the majority opinions in both cases treated algorithmic recommendation as an extension of publishing, and publishing is inherently expressive activity protected under the First Amendment. The framework proposed by the dissenting judges, however, would distinguish between traditional content-based claims (which would remain protected) and connection-based claims that focus on how algorithms connect users, potentially creating a new avenue for platform accountability. While the Supreme Court vacated the Gonzalez decision, it notably did not address the Section 230 analysis, leaving these fundamental questions about the scope of algorithmic immunity unresolved.
Public Knowledge argues that algorithms themselves should not be the target of regulation because they are simply tools – code that can be used to enhance or harm, depending on design and intent. Social media platforms should instead be regulated through comprehensive policy solutions that address the underlying business practices and harms rather than targeting algorithms themselves as inherently problematic.
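To make that point concrete, consider a deliberately simplified sketch of how the same ranking code can produce very different feeds depending on the objective its designers plug in. This is purely illustrative; every class, function, score, and number below is a hypothetical assumption, not drawn from any platform’s actual systems.

```python
# Hypothetical sketch: the "algorithm" is a neutral tool; the design choice is
# the objective it optimizes. None of this reflects any real platform's code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Post:
    text: str
    predicted_engagement: float  # e.g., a modeled click/dwell-time score
    predicted_harm: float        # e.g., a modeled policy-violation risk score


def rank_feed(posts: List[Post], score: Callable[[Post], float]) -> List[Post]:
    """Generic ranking: sort candidate posts by whatever objective is supplied."""
    return sorted(posts, key=score, reverse=True)


def engagement_only(post: Post) -> float:
    # Objective A: maximize engagement alone, which tends to reward provocation.
    return post.predicted_engagement


def engagement_with_guardrail(post: Post) -> float:
    # Objective B: the same signal, discounted by an estimated-harm penalty.
    return post.predicted_engagement - 5.0 * post.predicted_harm


posts = [
    Post("calm explainer", predicted_engagement=0.4, predicted_harm=0.0),
    Post("outrage bait", predicted_engagement=0.9, predicted_harm=0.3),
]
print([p.text for p in rank_feed(posts, engagement_only)])            # outrage bait first
print([p.text for p in rank_feed(posts, engagement_with_guardrail)])  # calm explainer first
```

The point is not that either objective is the “correct” one; it is that the harms at issue flow from design and intent choices like these, which is why we favor regulating the underlying business practices rather than treating algorithms as inherently problematic.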
Are There Product Liability Pathways That Don’t Involve Algorithms?
To successfully litigate product liability claims related to platform design features, plaintiffs must allege a harm that is not simply an indirect way of imposing liability for third-party content. Historically, courts haven’t distinguished between design features and algorithmically distributed content when assessing harm, with platforms successfully claiming both First Amendment protections and Section 230 immunity. But the 2021 Lemmon v. Snap, Inc. decision, in which the Ninth Circuit held that Section 230 did not bar claims that Snapchat’s speed filter feature encouraged the reckless driving that killed three young men in a high-speed crash, opened a potential pathway for holding platforms liable for harmful product design choices. This is a theory of liability distinct from holding a platform liable on the basis of the content it hosts and promotes. The court in Lemmon reasoned that Snap’s creation of the speed filter itself, not any specific video or recommendation, predictably prompted users to drive at dangerous speeds. That made the product inherently dangerous, without regard to any specific content or recommendation.
Although Lemmon made clear that what triggered possible liability was the act of designing a product that encouraged users to drive recklessly, not any content created with the speed filter or recommended by Snap, the precedent has since paved the way for more targeted legal challenges. Specifically, plaintiffs have attempted to sidestep the immunity that attaches to content shown to users by focusing on how platforms deliver and recommend that content, arguing that these design choices deliberately foster harmful user behaviors, including compulsive use and unsafe connections.
Promoting Compulsive Use and Addictive Behaviors
The most significant legal challenges against social media platforms center on the harm they cause to children. A pivotal case is Anderson v. TikTok, in which the platform may face responsibility for algorithmically amplifying “Blackout Challenge” videos of self-asphyxiation that led to 10-year-old Nylah Anderson’s death. While TikTok initially won dismissal in 2022 thanks to Section 230 protection, the Third U.S. Circuit Court of Appeals later reversed that ruling, determining that Section 230 doesn’t shield social media algorithms because those algorithms, as part of the platform’s own expressive activity, are not third-party speech. The case is likely to head to the Supreme Court due to conflicting circuit interpretations. Public Knowledge joined an amicus brief arguing that Section 230 immunity does apply to TikTok’s recommendation algorithms: Section 230 prevents platforms from being held liable as publishers, and “publishing” is an expressive activity in and of itself. The Third Circuit’s decision is just a new version of the long-discredited idea that Section 230 only protects platforms to the extent that they are somehow “neutral.” While Section 230 does not protect a platform’s own speech, it does permit a platform to decide what content it hosts, what it removes, and how it promotes and arranges that content. Going further, content moderation is itself expressive activity that the First Amendment protects.
There are features beyond algorithmic curation, though, that may both exacerbate addictive behaviors and fall outside of Section 230 protections. Notably, the Superior Court of the District of Columbia denied Meta’s motion to dismiss a case claiming the platform knowingly harmed children with features like “infinite scroll,” declaring that such features are excluded from Section 230 protection. This aligns with a broader California multidistrict litigation against major social media companies (Facebook, Instagram, Snap, TikTok, and YouTube), in which plaintiffs argue that these platforms induce excessive usage sessions through features like push notifications, auto-play, and psychological reward systems. There, the court found harms from such design features plausible enough to survive dismissal because the plaintiffs are not seeking to impose liability for any specific content shown on the platforms (which may be protected by Section 230), but are instead attacking the design architecture and behavior-shaping mechanisms the platforms themselves created.
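To illustrate what a design claim untethered from content can look like, here is a minimal, hypothetical sketch of a variable-reward notification scheduler, the kind of behavior-shaping mechanism these suits describe. The function name, thresholds, and probabilities are illustrative assumptions, not any platform’s actual implementation.

```python
# Hypothetical sketch of a content-agnostic, behavior-shaping design feature:
# a variable-reward notification scheduler. It decides WHEN to nudge a user
# based only on their behavior, never on WHAT content will be shown.
import random


def should_send_notification(hours_since_last_open: float,
                             notifications_sent_today: int,
                             daily_cap: int = 12) -> bool:
    """Ping lapsing users on an unpredictable (variable-ratio) schedule.

    Intermittent, unpredictable rewards are a classic way to reinforce habitual
    checking; note that nothing in this function depends on any post's content.
    """
    if notifications_sent_today >= daily_cap:
        return False
    # The longer the user has been away, the more likely we nudge them back.
    recall_pressure = min(hours_since_last_open / 24.0, 1.0)
    return random.random() < 0.2 + 0.6 * recall_pressure


# Example: a user who last opened the app 12 hours ago and has already
# received 3 notifications today.
print(should_send_notification(hours_since_last_open=12.0,
                               notifications_sent_today=3))
```

Because nothing in a feature like this turns on the substance of any third-party post, a claim challenging it reads as a product design claim rather than an attempt to hold the platform liable as a publisher.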
Facilitating Unsafe Connections
The beauty of social media platforms is their ability to connect like-minded users from anywhere in the world. The tragedy of social media is that those connections may not always be with well-meaning individuals. Time and again, a platform like Instagram, Roblox, or Snap comes under fire for enabling bad actors to exploit children, and shortly afterward, the company announces new child-safety features and parental controls. Yet the fact remains that certain elements of a platform’s design facilitate personal connections with problematic users, including public profiles by default, the ability to “Quick Add” strangers, and ephemeral (disappearing) messaging.
These features are not accidental. The business model underlying platform design provides an incentive to maximize user engagement and time spent on the platform, including by encouraging interaction with new users. Unfortunately, this same design logic has led to devastating consequences, such as children encountering drug dealers and purchasing fentanyl-laced pills, with fatal outcomes. That’s the crux of Neville v. Snap, a case in which plaintiffs allege an “attenuated causal chain purportedly linking their relatives’ injuries to various Snapchat features that enabled their relatives’ communications with drug dealers, rather than to those communications themselves,” in an effort to plead around Section 230. But to make a successful product liability claim that is not blocked by Section 230, plaintiffs must show that the harm does not relate to content published by the platform, or to a publisher’s traditional functions of selecting, promoting, and arranging content created by others.
The challenge in Neville v. Snap, and in similar cases that seek to hold platforms accountable for connecting users with malicious actors, is distinguishing harm that flows from viewing content from harm that flows from simply being connected with a stranger. In Neville, plaintiffs cite features like Snap Map (which surfaces content from nearby, unconnected users) and Stories (user-generated posts that expire in 24 hours) to argue that Snap’s design exposed children to drug-related content. However, the crux of the claim still hinges on users viewing third-party content. Because these features are fundamentally content display tools, courts are likely to find them protected under Section 230, which shields platforms from liability for publishing content created by others, even if that content is harmful or illegal (unless that content facilitates sex trafficking or contains child sexual abuse material).
Whether liability actually exists, or should exist, in any given case will be a very fact-specific inquiry. Many of the same features that might be decried as “dangerous” in one context might be a lifeline for people looking for support in another. Either way, these aspects of platform design are far afield from the “publishing” activities that Section 230 shields from liability. Remember, Section 230 does not say that platforms are not publishers – most of them clearly are. It says they cannot be held liable as publishers for things like defamation. A service can also escape publisher liability by simply not “publishing” anything at all: broadband ISPs and telephone companies are not liable as “publishers” for the content they transmit, not because of Section 230, but because, unlike social media platforms, they are not “publishers” to begin with. The same reasoning could apply to platform features that function like those services. Generally speaking, services that allow people to communicate with each other are usually not liable for harms stemming from what people use them to say; this is not because of a statutory shield from liability, but because of the specific facts of how they operate. Section 230 should not prevent courts from making the same inquiry into modern platforms.
Bringing a Product Safety Lens to Platform Accountability
The main takeaway here is that holding platforms accountable for the negative externalities they cause requires discernment among a myriad of platform design features and functions. Most notably, we believe it would be inappropriate for courts to rule that recommendation algorithms lack protection under Section 230. A claim that a platform is responsible for “algorithmically promoting” harmful content is necessarily tied to the content itself – so it triggers Section 230. (No one would argue that there should be liability for the algorithmic promotion of harmless fluff.) On the other hand, features that encourage users to behave in ways they otherwise wouldn’t, regardless of the nature of the content – like gamification and frequent notifications – may create potential liability.
We believe there are policy solutions that, when combined, can enhance platform accountability. This includes targeted Section 230 reform that preserves core protections for user speech while clarifying which types of platform conduct fall outside immunity.
More specifically, policymakers should consider removing Section 230 immunity for paid advertising content. The reform would apply only to actual advertisements, not to content appearing next to ads, and would cover just the ads themselves rather than the tools used to serve them. Liability would extend to all parties in the ad ecosystem, including platforms and intermediaries, though it would not be strict liability; each party’s knowledge and involvement would need to be evaluated separately. The rationale is that platforms have direct business relationships with advertisers and the ability to pre-screen ads, unlike with user content. Many online ads currently perpetuate harms such as scams, discrimination, and fraud, yet the current system allows platforms to profit from these ads without consequence.
Congress could also clarify which social media design features (those that don’t implicate content) could be open for litigation, particularly features that manipulate users into behaviors they wouldn’t otherwise engage in, such as compulsively scrolling or spending excessive time on platforms. Other candidates include obscured or difficult-to-access privacy settings, misleading notifications, and controls that offer the illusion of choice without meaningful effect (e.g., asking a platform to limit certain ads). Platforms should also be required to study and disclose the impacts of their design features on user behavior. This could include submitting to regular audits, publishing transparency reports, and sharing data with independent researchers to understand the full scope of platform harms.
For readers interested in diving deeper into these complex issues, we encourage you to check out our comprehensive Policy Primer for Free Expression and Content Moderation, Part III: Safeguarding Users. There, we explore the full range of policy tools available to address platform harms while protecting the Open Internet that has enabled so much beneficial speech and innovation.