A Policy Primer for Free Expression and Content Moderation, Part III: Safeguarding Users

In this third post of a four-part blog series, we turn to policy interventions specifically designed to enhance free expression and content moderation on digital platforms while preventing harm to people and communities.

Designing Policy Interventions to Safeguard Users from Harm

With this new four-part series, Public Knowledge unveils a vision for free expression and content moderation in the contemporary media landscape. 

In Part I: Centering Public Interest Values, we provide a brief historical perspective on platform content moderation, review the values that Public Knowledge brings to this topic, and discuss the importance of rooting content moderation approaches and policies in user rights. We also consider a theory that user rights should include the right to hold platforms liable if they don’t enforce the community standards and/or product features they contract for in their terms of service.

In Part II: Empowering User Choice, we discuss the structure of digital platform markets and the necessity of policy choices that create healthy competition and user choice. We also center digital platforms in the broader ecosystem of news and information, and discuss how policy interventions may offset the impact of poor platform content moderation on the information environment by promoting other, diverse sources of credible news.

Here in Part III: Safeguarding Users, we turn to policy interventions specifically designed to enhance free expression and content moderation on digital platforms while preventing harm to people and communities. 

In Part IV: Tackling AI and Executing the Vision, we discuss the implications of the new “elephant in the content moderation room,” generative artificial intelligence, for free expression and content moderation. We also discuss how our recommended policy interventions can be made durable and sustainable, while fostering entrepreneurship and innovation, through a dedicated digital regulator.

Readers looking for more information about content moderation can visit our issue page, learn more about the harms associated with algorithmic curation of content, and explore why multiple policy solutions will be required to ensure free expression and effective content moderation. 

To frame the policy interventions in this post: It’s important to note that an enormous amount of the focus on content moderation – and platform accountability more broadly – is in the interest of preventing harm. At Public Knowledge, we categorize the potential harms of algorithmic curation of content into the broad categories of (1) harms to safety and well-being (including privacy, dignity, and autonomy); (2) harms to economic justice (including access and opportunity); and (3) harms to democratic participation (including through misinformation). These harms can arise from obvious issues like cyberbullying and non-consensual intimate imagery (NCII), from targeting of content that reflects and amplifies bias, and from purposeful narratives of disinformation. 

While academics, health professionals, social science researchers, advocates, policymakers, and industry leaders continue to debate the causal relationship between platform content moderation and user harm, there is clear and growing momentum pushing platforms to be safer for all users. With this understanding, we propose policy approaches that aim to balance user safety with the preservation of free expression, focusing on product liability, comprehensive privacy protection, and requirements for algorithmic transparency.

The Product Liability Theory: Revisiting “Big Tech’s Tobacco Moment”

A theory gaining momentum among some policymakers and civil society groups like Public Knowledge is that platforms’ design features – separate and distinct from the nature of the content they serve to users, or how they serve it – can create harms, and that platforms should be liable for the harms their design features cause. Generally, product liability can take the form of claims regarding manufacturing defects, defective design, or failure to provide instructions or warnings about proper use of a product. In the case of platforms, most of the discussion about product liability refers to product design that increases time spent on the service, triggers compulsive use, motivates unhealthy or chronic behaviors, overrides self-control, creates social isolation (all of which can negatively affect self-image and mental health), and introduces unsafe connections to users. This theory holds that, as in other industries, platforms should be accountable – that is, legally liable – for the harms that are caused by the design of their products. (The other industry most often referenced under this theory, by far, is tobacco. Public Knowledge covered “Big Tech’s Tobacco Moment” in 2021.) This goes further than the consumer protection theory we described in Part I: Under the product liability theory, not only must a platform not be deceptive or unfair, but it must also take affirmative steps to make its products safe and to warn users if they are not.

It’s also important to note that policy proposals and lawsuits advanced under this theory, by definition, are not explicitly about free expression and content moderation (even though they are designed to address some of the same harms). Specifically, we do not include algorithmic serving or amplification of user content in our definition of product design under this theory. In fact, we have previously noted “…our continued belief that models that call for direct regulation of algorithmic amplification – whether it’s the speech itself or the algorithms that distribute it – simply wouldn’t work or would lead to bad results.” One way to “test” claims or proposals advanced under this theory is to ask whether the claim or proposal requires knowledge of, or reference to, specific pieces or types of harmful content. If it does, then the claim or proposal really refers to content liability and is likely barred by both Section 230 and the First Amendment. Plaintiffs may seek to work around these prohibitions by characterizing their theory of liability in different terms (like claiming that “recommendations” are manifestations of the platform’s own conduct). However, any theory of liability that depends on the harmful contents of third-party material constitutes treating a provider as a publisher and is barred by Section 230. 

All that said, proposals advanced under the product liability theory as we define it are showing promise as a way to create platform accountability for harms without the constitutional or legal barriers associated with direct regulation of content. In this section we talk about the origins of this theory, its current focus in federal policy, and alternative paths to apply it. 

Origins of the Product Liability Theory

Over the past few years, thanks to researchers, journalists, and whistleblowers, we’ve all become more aware of the externalities of the platforms’ advertising-based business model. That model – which drives the vast majority of the revenue of virtually all the major search, social media, and user-generated entertainment platforms – incents the platforms to design product features that maximize the time and energy people put into searching, scrolling, liking, commenting, and viewing. It’s simple: users’ attention, focused via algorithmic targeting on content that is most likely to be relevant, is those platforms’ only inventory. It’s what they sell (to advertisers). So they design features into their products to create more of it.  

The platforms call this time and energy – this attention – “engagement.” It’s a deliberately upbeat and positive-sounding word that platforms adopted from the traditional ad industry to describe the time users spend scrolling, viewing, liking, sharing, or commenting on other people’s posts. It’s catnip to advertisers, who assume their ads will benefit from it, too. But that same “engagement” – and platforms’ efforts to increase it – has been associated with compulsive use, unhealthy behaviors, social isolation, and unsafe connections. As awareness of the ad-based business model and its potential for harm grew, policymakers brought forward proposals to understand, and then regulate, the role of product design in the harms of social media. One early proposal Public Knowledge favored, the Nudging Users to Drive Good Experiences on Social Media Act (the “Social Media NUDGE Act”), called for government agencies to study the health impacts of social media; identify research-based, content-agnostic interventions to combat those impacts; and determine how to regulate their adoption by social media platforms. If it had passed, we might be having more evidence-based policy discussions today.

However, an enormous amount of the subsequent focus in legal and policy circles shifted to how platforms secure more time and attention to sell to advertisers by algorithmically targeting and amplifying provocative content. As renowned tech journalist Kara Swisher puts it, “enragement equals engagement,” meaning the content that elicits the most attention tends to be the most inflammatory. But court case after court case claiming harms from algorithmic distribution of content has been dismissed or lost on one of two grounds. The first is the Section 230 liability shield, which insulates platforms from liability for user content or how it is moderated. The second is the First Amendment, which gives platforms their own expressive rights in content moderation. Similarly, bill after bill in Congress focused specifically on influencing platform policies and practices regarding content moderation has been rejected – as it should be – as contradictory to Section 230 or to the platforms’ own expressive rights under the First Amendment.

Obviously, some of the ill effects of social media are due to actual third-party content as well as the amplification of this content to users who didn’t ask for it. This includes harassment, hate speech, extremism, disinformation, and calls for real-world violence. But our product liability theory holds that some harms can be caused by product design features that are rooted in the platforms’ ad-based business model and the need to sustain user attention. So rather than pointing to content or content curation, the product liability theory implicates product liability law, holding platforms liable for the harm they cause as the designer, manufacturer, marketer, or seller of a harmful product, not as the publisher or speaker of information. 

Current Focus of the Product Liability Theory

In Congress and the courts, the current focus of the product liability theory is the well-being of children and adolescents, whose attention generates an estimated $11 billion a year in digital ad revenue in the U.S. Some of this focus is rooted in research and whistleblower revelations about the impact of social media usage on kids – and what the platforms know about it. For example, early in 2023, the U.S. Surgeon General issued an advisory noting that “social media can… pose a risk of harm to the mental health and well-being of children and adolescents,” since adolescence is a particularly vulnerable period of brain development, social pressure, and peer comparison. More recently, the Surgeon General has called for warnings akin to those on cigarette packages, designed to increase awareness of the risks of social media use for teens. A best-selling book puts forward a case that a “phone-based childhood,” combined with a decrease in independent play, has contributed to an epidemic of teen mental illness. The question of social media’s impact on kids gained more steam recently from another round of revelations about “what Facebook knew and when they knew it… and what they didn’t do about it” in regard to child safety. Policy proposals rooted in the product liability theory enjoy support from, and have sometimes been shaped by, youth advocacy organizations, including DesignItForUs and GoodForMEdia.

One major challenge to all of this: While the crisis in youth mental health is very real, research on whether social media actually causes it is mixed at best. That said, products and services may cause harm even if they don’t create a health crisis. For example, a literature review from the National Academies of Sciences, Engineering, and Medicine recently concluded that social media may not cause changes in adolescent health at the population level, but may still “encourage harmful comparisons; take the place of sleep, exercise, studying, or social activities; disturb adolescents’ ability to sustain attention and suppress distraction during a particularly vulnerable biological stage; and can lead, in some cases, to dysfunctional behavior.”

Policymakers also focus the product liability theory on kids because of the higher likelihood of bipartisan agreement, after years of failures to regulate Big Tech. Hauling Big Tech CEOs into hearings and demanding apologies for the genuinely heartbreaking losses families have experienced as their children confronted mental health crises or physical harm makes for viral moments. The risk is legislation propelled by “moral panic and for-the-children rhetoric” rather than by sound evidence of what will actually improve youth mental health (which most experts agree requires a more multidimensional approach).

In our view, the product liability theory may in fact be an effective way to mitigate some of the harms associated with social media while circumventing both constitutional challenges and the intermediary liability protections provided by Section 230. But it should apply to all users of social media – not just kids and teens. 

There are three ways to apply the product liability theory to mitigate harms from product design: through litigation, through reform of Section 230, and through new legislation. 

Litigation

In the past, courts have not generally distinguished between the role of product design features and the role of content distributed by algorithms in creating harm. The platform defendants didn’t help: They generally argued they were exempt from liability due to their own expressive rights or the broad protection provided by the liability shield of Section 230. As a result, algorithms and an expanding array of other platform product features have been determined by judges to be the platforms’ own protected speech and/or shielded by Section 230. Most cases have been dismissed quickly on the premise that Section 230 bars claims based on alleged design defects if the plaintiffs seek to impose a duty to monitor, alter, or prevent the publication of third-party content. 

However, a set of more current cases makes finer distinctions between product design features and the algorithmic curation of user content – though they don’t always agree where the line is. In 2021, in Lemmon v. Snap, judges determined that it was one of Snapchat’s own product features – a speed filter – and not user content that had created harm. Since then, almost 200 cases have been filed alleging product defects or similar claims, and some are making it past motions to dismiss based on Section 230 and/or the platforms’ own expressive rights.

For example, in what is now a multidistrict product liability litigation against Facebook, Instagram, Snap, TikTok, and YouTube, a judge determined that some product design choices of the platforms (like ineffective parental controls, ineffective parental notifications, barriers that make it more difficult for users to delete and/or deactivate their accounts than to create them, and filters that allow users to manipulate their appearance) neither represent protected expressive speech by the platforms (so the First Amendment does not protect them), nor are they “equivalent to speaking or publishing” (so they are not shielded by Section 230). A district court judge in Utah found that Section 230 does not preempt a state law’s prohibitions on the use of autoplay, seamless pagination, and notifications on minors’ accounts. More recently, courts have split on whether TikTok’s recommendation algorithm is the platform’s own “expressive activity” or whether the platform is liable in the tragic death of a 10-year-old girl who participated in the “blackout challenge” found on the platform. (Public Knowledge joined an amicus brief in this case, arguing that platforms may have both Section 230 immunity and First Amendment protections for their editorial decisions, including algorithmic recommendations.)

The Superior Court of the District of Columbia, in its civil division, denied Meta’s motion to dismiss a case claiming that personalization algorithms that leverage a variable reward schedule, alerts, infinite scroll, ephemeral content, and reels in an infinite stream foster compulsive and obsessive use because “the claims in the case are not based on any particular third-party content.” For this reason, the court “respectfully decline[d] to follow the decision of the judge in the multidistrict litigation.” In October of 2024, the Attorney General of New Mexico released new details of the state’s lawsuit against Snapchat, which claims the company fails to implement verifiable age verification, designs features that connect minors with adults, and fails to warn users of the risks of its platform.

The outcome of all these ongoing cases obviously depends on many variables, but together they may signal whether it is possible to distinguish design features from content or algorithmic curation – and whether a litigation path for the product liability theory is viable.

Section 230 Reform 

An alternative way to apply the product liability theory would be targeted reform of Section 230 designed to clarify which aspects of a platform’s own conduct or product design lie outside the protections of the intermediary liability shield. In our view, conduct apart from hosting and moderating content is already outside the scope of Section 230. But while past court cases (like Homeaway and Roommates) have demonstrated this for specific activities and fact patterns, the sheer number of current court cases (and the sometimes-conflicting decisions arising from them) show it may be difficult to draw the line. Such reform would not create liability for any elements of product design, but it would allow the case to be made in court. 

Public Knowledge has proposed Section 230 principles to protect free expression online. Targeted reform of Section 230 to advance the product liability theory could be accomplished while adhering to those principles. One principle states that Section 230 already does not shield business activities from sensible business regulation. Another principle is that Section 230 was designed to protect user speech, not advertising-based business models (which most of these product design features are meant to advance). A third principle states that Section 230 reform should focus on the platform’s own conduct, not user content. (As a result of these principles, Public Knowledge also proposes that users should be able to hold platforms accountable for taking money to run deceptive or harmful ads because paid ads represent the business relationship between the platform and an advertiser, not users’ free expression.) As noted, great care would have to be taken to distinguish between platforms’ business conduct and their own expressive speech (as well as the speech of users), but if done well, this would be a content-neutral approach to Section 230 reform that would withstand First Amendment scrutiny.

New Legislation

As noted, most of the focus for the product liability theory by policymakers has been specifically about the safety of kids and teens. The federal proposal that has gained the most bicameral, bipartisan traction under this theory is the Kids Online Safety Act, or KOSA, and it exemplifies both the mechanisms and risks that accompany child-focused legislation. The bill requires that platforms exercise a “duty of care” when creating or implementing any product design feature that might exacerbate harms like anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors. Platforms must limit design features that increase the amount of time that minors spend on a platform. And the bill requires platforms to make the highest privacy and safety settings the default setting for minors, while allowing them to limit or opt out of features like personalized recommendations.

Criticism of KOSA – and bills similar to it at both the federal and state level – largely centers on the belief that any requirement that platforms treat minors differently from adult users will inevitably lead to age-gating. Age-gating refers to the use of electronic protection measures to either verify or estimate users’ ages in order to restrict access to applications, content, or features to those of a legal (or deemed-appropriate) age. Both age verification and age estimation carry hazards, including technical limitations, the risk of bias, and privacy risks. Age verification requirements have also been deemed unconstitutional at the state level because they impinge on all users’ rights to access information and remain anonymous.

There are also concerns that any duty of care applied to content platforms – no matter how specifically or narrowly defined – will inevitably lead to content restrictions, particularly for marginalized groups, as interested parties deem content they find objectionable to be “unsafe.” Court precedent so far would disallow such demands by plaintiffs under the First Amendment and the protections of Section 230, though that would not prevent platforms from removing certain categories of content themselves in order to avoid the legal risk. These concerns arise in part because the overall concept of a duty of care can be amorphous. Common law duties, including the duty of care, have evolved over centuries in other contexts, may require expert testimony to verify, and are subject to different interpretations by juries drawn from communities with differing values.

The product liability theory, which we support, has some similarities to frameworks calling for “safety by design.” Combined with a national privacy standard, which we discuss below, such legislation would help users avoid the harms associated with certain product features without impacting users’ or platforms’ expressive rights. But this is another case where Public Knowledge would prefer to see protections for all users, not just kids and teens. Rather than run headlong into the buzzsaw of opposition to age-gating, policymakers could articulate content-neutral regulations that govern product features related to the platforms’ advertising-driven business model. This would also remove the ambiguity about what material is “suitable” or “safe” for minors based on its subject matter or point of view. Such regulations may prohibit certain product design features (for example, dark patterns meant to manipulate user choices). They may include requirements for enhanced user control over their experience (for example, requiring that safety and privacy settings are at their highest possible setting by default). And/or, they may require a more focused “duty of care,” or duty to exercise reasonable care in the creation and implementation of any product feature designed to encourage or increase the frequency, time spent, or activity of users. The regulations may also require platforms to study the impact of their product design, make data available to researchers for such studies, and make any findings available for audits or transparency reports. 
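To make the “highest settings by default” idea concrete, here is a purely hypothetical sketch, in Python, of the kind of defaults a content-neutral design rule might require a platform to ship with. The setting names are our own illustration and are not drawn from any bill text.

```python
# Hypothetical illustration of "safest settings by default."
# Setting names are invented for this sketch, not taken from any legislation.
DEFAULT_ACCOUNT_SETTINGS = {
    "personalized_recommendations": False,   # opt-in rather than opt-out
    "autoplay": False,
    "infinite_scroll": False,
    "push_notifications": "off",
    "profile_visibility": "private",
    "direct_messages_from": "approved_contacts_only",
    "activity_status_visible": False,
}

def new_account_settings(user_overrides=None):
    """Start every new account at the most protective defaults; users may relax them later."""
    settings = dict(DEFAULT_ACCOUNT_SETTINGS)
    settings.update(user_overrides or {})
    return settings
```

The point of the sketch is that such a rule governs product defaults, not the content users see, which is what keeps the approach content-neutral.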

Limiting Data Collection and Exploitation Through Privacy Law

Remember the 2018 Cambridge Analytica scandal? The British political consulting firm acquired personal data from millions of Facebook users for targeted political advertisements in 2016. A personality quiz application on Facebook, created by a psychology professor and funded by Cambridge Analytica, collected data from the users who installed it – and, without consent, from those users’ friends – in order to run targeted political digital advertising campaigns. Although only 270,000 users consented to have their data harvested, Cambridge Analytica obtained data from around 30 million users connected to those initial participants. While its actual impact on the Brexit vote has been shown to be minimal, the scandal created wide awareness of platforms’ data practices.
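To see how a relatively small number of app installs can balloon into tens of millions of affected people, here is a back-of-the-envelope sketch. The average-friends figure is a hypothetical assumption chosen only to show the order of magnitude; it is not a number from the reporting on the scandal.

```python
# Back-of-the-envelope sketch of friend-graph data collection.
# average_unique_friends is a hypothetical assumption for illustration only.
consenting_users = 270_000        # users who installed the quiz app
average_unique_friends = 110      # hypothetical: friends whose data was also exposed per user

affected_profiles = consenting_users * average_unique_friends
print(f"Roughly {affected_profiles:,} profiles exposed")  # Roughly 29,700,000 profiles exposed
```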

Technically, the personality quiz app’s transfer of user data to Cambridge Analytica violated Facebook’s terms of service. However, from a legal standpoint, the acquisition, sale, and sharing of personal data by platforms or data brokers without individuals’ knowledge is often permissible. After all, there is no single, comprehensive federal privacy law that governs the handling of personal data in the United States, so data collected online or through digital products has little regulatory oversight. The concentration of dominant platforms means a handful of giants control vast amounts of data, which may be used for privacy-invasive activity, like behaviorally targeted advertising and profiling users to target content. This can compound harms, especially to marginalized communities that are often the target of hate speech and harassment.

At Public Knowledge, we advocate for protecting consumer privacy through requirements for data minimization, informed consent, and effective user controls. We advocate for a comprehensive federal privacy law that provides a foundation for states to build upon and includes a private right of action, enabling consumers to take legal action when necessary.
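As a rough sketch of what a data minimization requirement means in practice, the example below keeps only the fields a service has declared necessary for a stated purpose and drops everything else. The purposes and field names are hypothetical; an actual statute would define these categories and obligations.

```python
# Minimal sketch of purpose-based data minimization (hypothetical purposes and fields).
ALLOWED_FIELDS_BY_PURPOSE = {
    "account_creation": {"email", "display_name"},
    "age_appropriate_defaults": {"birth_year"},
}

def minimize(collected: dict, purpose: str) -> dict:
    """Retain only the fields declared necessary for the stated purpose."""
    allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose, set())
    return {k: v for k, v in collected.items() if k in allowed}

raw = {
    "email": "user@example.com",
    "display_name": "sam",
    "precise_location": "40.71,-74.01",   # not needed to create an account
    "contact_list": ["alice", "bob"],     # not needed to create an account
}
print(minimize(raw, "account_creation"))  # {'email': 'user@example.com', 'display_name': 'sam'}
```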

State of Play in Privacy Law

Companies have, time and time again, been exposed for sharing sensitive personal data without user consent. The Federal Trade Commission plays consumer protection Whac-A-Mole by slapping fines on privacy-violating companies – like the $7.8 million penalty against BetterHelp, the online therapy company, which shared customers’ health data with Facebook and Snapchat for advertising. Regulatory enforcement can punish bad actors, but it does little to prevent the privacy-invasive behavior – or the harms it may cause – in the first place. That’s where comprehensive national policymaking comes in.

The U.S. has tried – in vain – to pass a federal privacy bill. Since 2021, Public Knowledge has supported – with some caveats – the Online Privacy Act and the American Data Privacy and Protection Act (ADPPA). The latter bill aimed to prevent discriminatory use of personal data, to require algorithmic bias testing, and to carefully restrict the preemption of state privacy laws, among other benefits. Most recently, in 2024, the American Privacy Rights Act (APRA) succeeded ADPPA, but Public Knowledge – and some Democratic lawmakers – came to oppose it due to the removal of key civil rights protections. 

While we have expressed support for a variety of privacy-related bills, we believe that truly effective privacy protections require addressing the entire online data ecosystem, not just targeted measures. One-off actions can have minimal real-world impact, especially if aimed at a specific company or practice (looking at you, TikTok ban). Worse, they may reduce avenues for free expression online while allowing Congress to neglect the need for comprehensive privacy protections across all communities.

The Misguided Focus On Kids’ Privacy

While any attempt at comprehensive privacy legislation withers in Congress, more focused battles over child privacy persist, for the same reasons we noted in regard to the product liability theory. The great impasse in the child privacy debate is that – you guessed it – bills tend to mandate data minimization while also proposing age verification mechanisms that would require additional collection of personal data. These proposals have taken various forms, including Section 230 carve-outs, which would make platforms liable for child privacy-invasive behavior.

The Children’s Online Privacy Protection Act (COPPA), enacted in 1998, is the original law safeguarding child users (in this case, those under 13) from websites collecting personal information without consent. COPPA re-emerged in the last couple of years as policymakers sought to update the framework to better reflect the evolution of social media. Known as COPPA 2.0, the revised bill increases the covered age to 17 and requires platforms to comply when they have implied – rather than actual – knowledge that a particular user is a minor. Public Knowledge supported the new COPPA framework, but not without critiques. The biggest was that – no surprise – we believe that any privacy law should be applicable to all users, not just kids.

There is also a slew of bills that specifically target the awful proliferation of child sexual abuse material (CSAM) online. Unfortunately, most of these proposed laws also miss the mark. Notably, the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, floating around Congress since 2020, would repeal Section 230 protections for platforms that do not act sufficiently on CSAM, exposing them to criminal and civil liability for its distribution and presentation. We’ve steadfastly opposed EARN IT, not only because repealing Section 230 would have such detrimental effects on free expression, but also because the bill would eliminate or discourage encryption services and force platforms toward overly broad content moderation, which disproportionately impacts marginalized communities. We think that users, such as journalists messaging sensitive sources, have the right to communicate free from surveillance by third parties by leveraging end-to-end encrypted messaging.

Similarly, the Strengthening Transparency and Obligation to Protect Children Suffering from Abuse and Mistreatment (STOP CSAM) Act falls short, compromising privacy by discouraging end-to-end encryption. One very important note relevant to both EARN IT and STOP CSAM: Section 230 already has an exception for federal criminal activity, which includes the distribution of CSAM. If we want to curb child exploitation, increased surveillance of everyone is not the solution. Enforcing existing laws and putting resources towards victim identification and assistance would be a more productive, rights-preserving approach. 

Requiring Algorithmic Transparency 

The lifeblood of a digital platform is not the user, the interface, or the posts – it is the algorithms: the complex sets of computational rules used to organize content in your feed or your search results. Digital platforms utilize machine-learning algorithms to tailor content feeds, aiming to maximize user engagement – and, as we’ve noted, platform profits. These algorithms analyze personal data such as viewing habits, geographic location, platform history, and social connections to prioritize content users will likely engage with. Algorithms are tools created by humans to perform specific functions. They not only arrange and rank content in feeds but also enforce platform content guidelines by identifying and removing inappropriate content in an automated way (ideally in conjunction with human moderators who can understand and apply cultural and context cues). Yet, while algorithms can enhance personalized user experiences, they can also amplify harmful or discriminatory content.

Early social media platforms displayed content reverse-chronologically, but this approach quickly became inadequate as information volume and investor demands for monetization grew. For example, recognizing users’ struggle to navigate the flood of content, Facebook introduced EdgeRank in 2007. It was one of the first sophisticated social media algorithms, and it drove both user engagement and profit optimization for Facebook’s also-new ad-based business model. The EdgeRank algorithm prioritized content based on three key factors: the frequency of user interactions with friends; the types of content a user typically engaged with; and the recency of posts. This system aimed to present users with a more personalized and engaging feed, effectively filtering out less relevant content and highlighting posts deemed more likely to interest each individual user. Facebook has since fine-tuned its algorithm, now integrating tens of thousands of variables that better predict what users would like to see on their feeds and what will keep their eyes on the platform for as long as possible. Today, each and every social media platform utilizes its own proprietary algorithm to attract and keep users glued to their feeds. 
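Public descriptions of EdgeRank boil it down to summing, over a post’s interactions (“edges”), the product of those three factors: affinity, content-type weight, and a time decay. The sketch below is a simplified reconstruction based on those public descriptions, not Facebook’s actual code, and the weights and decay rate are invented for illustration.

```python
import math
import time

def edgerank_score(edges, now=None):
    """Simplified EdgeRank-style score: sum of affinity * weight * time decay.

    Each edge is a dict with:
      - affinity: how often the viewer interacts with the post's author (0..1)
      - weight:   how heavily this interaction type counts (e.g., a photo > a like)
      - created:  Unix timestamp of the interaction
    The decay rate and weights are invented for illustration.
    """
    now = now or time.time()
    score = 0.0
    for edge in edges:
        age_hours = (now - edge["created"]) / 3600
        decay = math.exp(-0.1 * age_hours)  # newer interactions count for more
        score += edge["affinity"] * edge["weight"] * decay
    return score

# Example: a close friend's photo posted two hours ago, with a comment an hour ago
post_edges = [
    {"affinity": 0.9, "weight": 1.5, "created": time.time() - 2 * 3600},
    {"affinity": 0.6, "weight": 1.0, "created": time.time() - 1 * 3600},
]
print(round(edgerank_score(post_edges), 3))
```

Today’s ranking systems replace these hand-tuned factors with machine-learned predictions over far more signals, but the underlying goal – predicting what will keep a user engaged – is the same.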

Algorithmic ranking of content, particularly in combination with design features such as endless scroll and recommendations, can exacerbate harm in several ways. It can expose users to increasingly extreme content, send them down subject matter rabbit holes, and narrow the range of views and voices they see. Effective content moderation requires an understanding of context and cultural nuances, whereas algorithms typically rely on specific terms or hashes, which may not capture the full meaning or intent behind the content. As we’ve described, algorithmically mediated enhancement of exposure and engagement with divisive, extreme, or disturbing content can have real-world impacts in terms of public civility, health, and safety. Algorithmic ranking can give rocket fuel to toxic online user behaviors like targeted cyberbullying, verbal abuse, stalking, humiliation, threats, harassment, doxing, and nonconsensual distribution of intimate images.  
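To make the “specific terms or hashes” point concrete, here is a toy sketch of exact-match filtering and why it misses context. The term list is invented, and the “blocked” hash is simply the SHA-256 of an empty file, used so the example runs; real systems use much larger term lists and perceptual (rather than cryptographic) hashes for media.

```python
import hashlib

BLOCKED_TERMS = {"examplebadword"}                  # invented term list
BLOCKED_IMAGE_HASH_PREFIXES = {"e3b0c44298fc1c14"}  # SHA-256 prefix of an empty file, demo only

def flag_text(post: str) -> bool:
    """Exact term matching: blind to sarcasm, quotation, reclaimed terms, and misspellings."""
    words = post.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

def flag_image(image_bytes: bytes) -> bool:
    """Exact hash matching: changing a single byte of the file produces a different hash."""
    return hashlib.sha256(image_bytes).hexdigest()[:16] in BLOCKED_IMAGE_HASH_PREFIXES

print(flag_text("this post quotes examplebadword in order to condemn it"))  # True: context ignored
print(flag_text("this post uses examplebadw0rd instead"))                   # False: trivial evasion
print(flag_image(b""))      # True: exact match on a known hash
print(flag_image(b"\x00"))  # False: one changed byte defeats the match
```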

Adding to this complexity, companies frequently adjust moderation policies and practices in response to current events or political pressures. Moderation algorithms can also reflect the cultural biases of those who coded them: predominantly male, libertarian, Caucasian or Asian coders in Silicon Valley. While we can observe the effects of these algorithms, their inner workings remain largely obscure, often referred to as “black boxes.” Even so, malicious actors can exploit these algorithmic vulnerabilities – playing on current events, political pressures, or predictable biases – to optimize harmful content without needing to understand the underlying code. The opacity, potential for biased data and feedback loops, lack of oversight, and scale of algorithms compound their impact compared to human moderation. These harms disproportionately affect historically marginalized groups, as platforms sometimes disregard or suppress research indicating discriminatory content moderation practices, leaving allegations of racism, sexism, and homophobia from users largely unaddressed.

Since the revelations of whistleblowers like Frances Haugen of Facebook in 2021, which showed that platforms knowingly amplify harmful content, policymakers have been interested in holding platforms accountable for algorithm-related harms. Various algorithmic transparency bills have been proposed in Congress, aiming to shed light on the mechanisms driving social media algorithms, potentially enabling researchers and regulatory bodies to monitor and, when necessary, intervene in their operation. 

As part of Public Knowledge’s broader advocacy for free expression and content moderation, we recognize that algorithms are crucial to platform operations but are currently too opaque. Rather than advocating for restrictions on algorithm use, Public Knowledge supports legislation that mandates transparency in algorithms and ensures users have a clear understanding of how platform content moderation decisions are made. Some notable examples are the decisions by X to downrank posts with links (to keep people on the platform) and by Meta to downrank news by default (ostensibly to respond to users wanting less “political” content in their feeds). Such decisions warrant more transparency and choice for users given their impact on the availability of news and information. 

In 2023, two court cases raised the question of whether social media companies are liable for contributing to harm to users under the Anti-Terrorism Act (ATA) by hosting and/or algorithmically promoting terrorist content.

In Twitter v. Taamneh, the Supreme Court considered whether a platform that hosted ISIS-related content could be liable under the ATA, which prohibits “knowingly providing substantial assistance” to designated terrorist groups, and whether Section 230 should shield it from liability. But the Court found that a social media company that merely hosted such content (because it was open to anyone to create accounts and post material) did not meet the ATA’s knowledge threshold. Because, under the facts of the case, Twitter could not have been found liable, the Court did not need to decide whether Section 230 would have shielded Twitter from liability under the Act.

In Gonzalez v. Google, plaintiffs similarly argued that Google should be liable for algorithmically promoting terrorist-related content on its YouTube platform. The Biden administration filed a brief in this case, arguing that Section 230 did not shield platforms from liability for algorithmic content recommendations.  (Public Knowledge filed a brief disagreeing with this claim.) However, given the result in the Taamneh case, Google could not have been found liable, whether or not Section 230 applied.  The Court therefore did not issue a decision clarifying the scope of Section 230.  

The Court may not be able to avoid ruling on Section 230 in future cases, but both Taamneh and Gonzalez demonstrate that, even without Section 230, holding a platform liable for harms stemming from content it hosts or recommends is difficult. Specific legal claims such as those under the ATA generally have high thresholds of culpability, such as requiring a platform to deliberately promote harmful material, instead of such material being swept up by a general-purpose algorithm. Further, the First Amendment largely protects platforms (and their users) from liability even for promoting false or dangerous material, absent a showing of knowledge and culpable conduct or the presence of a specific duty of care (such as that of doctors to their patients). While Section 230 does shield platforms from liability in some cases, it largely cuts short litigation that has little chance of success to begin with.

The Right Policy Framework Can Make Algorithms Both Helpful and Healthy

Proposed policy frameworks to regulate algorithmic decision-making range from banning the use of algorithms entirely, to holding platforms liable for algorithmically amplified content, to requiring transparency, choice, and due process in algorithmic content moderation. Banning algorithms outright is a narrow and impractical solution, given their dual role in promoting content and enforcing platform guidelines. As we’ve noted, Section 230 and the First Amendment preclude blanket liability for algorithmic curation, and broad liability would result in platform over-moderation fueled by risk aversion. Instead, Public Knowledge believes the best solution is to require transparency into platforms’ algorithmic design and outcomes as a component of better and more evidence-based regulation and informed consumer choice. Transparency also provides the means to create accountability for platforms’ enforcement of their own policies, as we recommended earlier. It is the role of Congress to pass legislation that empowers users and addresses aspects of social media platforms’ business models that can perpetuate harm.

For example, Public Knowledge supports the bipartisan Internet Platform Accountability and Consumer Transparency Act (Internet PACT Act), which would require social media companies to publish their content rules, provide transparency reports, and implement a user complaints mechanism. This approach ensures platforms adhere to their own rules while providing users with clear guidelines and appeal processes. It aims to enhance transparency, predictability, and accountability, with enforcement by the FTC.
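As a purely illustrative sketch of what a platform transparency report could contain, the structure below captures the kinds of aggregate figures researchers and regulators typically ask for. The field names are ours, not drawn from the Internet PACT Act’s text.

```python
# Hypothetical structure for a periodic transparency report; field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TransparencyReport:
    platform: str
    period: str                                                      # e.g., "2024-Q3"
    posts_actioned_by_policy: dict = field(default_factory=dict)     # policy category -> count
    actions_by_detection_method: dict = field(default_factory=dict)  # "automated" / "user_report" / ...
    appeals_received: int = 0
    appeals_granted: int = 0

report = TransparencyReport(
    platform="ExampleSocial",
    period="2024-Q3",
    posts_actioned_by_policy={"harassment": 12_400, "spam": 98_000},
    actions_by_detection_method={"automated": 88_000, "user_report": 18_500, "human_review": 3_900},
    appeals_received=2_150,
    appeals_granted=430,
)
```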

Another bill we support is the Platform Accountability and Transparency Act (PATA), reintroduced in 2023. It aims to improve platform transparency and oversight by creating a National Science Foundation program for researcher access to data, setting FTC privacy and security protocols, and mandating public disclosure of advertising and viral content. The updated version of the bill no longer seeks to revoke Section 230 protections from non-compliant platforms, a change welcomed by Public Knowledge.

Despite bipartisan support, neither of these bills has advanced to a vote.

Learn more about tackling AI and executing the vision for content moderation in Part IV.