Where the Rubber Meets the Road: Section 230 and Civil Rights


    Over the past couple of months, millions of people across the U.S. have protested the inequitable treatment that Black, Indigenous, and People of Color (BIPOC) communities, particularly Black people, have faced throughout our nation’s history. Unlike during previous racial justice movements, much of how we connect, work, live, and engage now happens online. Digital platforms and technology play a role in every aspect of our society, including protecting civil rights. Tech policy, as integral to our everyday lives as the tech we use, is crucial to our understanding of what social justice means in this digital era. Section 230 of the Communications Decency Act is part of that conversation as digital platforms, members of Congress, and advocates contend with the relationship between digital platforms and civil rights more broadly.

    Section 230 is an important law for preserving platforms’ power to moderate content and promote safe spaces for interacting online. Without it, platforms could face severe liability risk, chilling moderation and leaving a cesspool of content that includes harmful disinformation and constitutionally protected hate speech. Unfortunately, Section 230 has also been used in court as a tool by platforms to avoid meaningfully confronting the role their products can play in furthering racial inequality; tech companies may hide behind this law to avoid discussing how their platforms affect civil rights issues, including housing, employment, and lending. In this moment of racial reckoning, the benefits of Section 230 must work in concert with platform accountability, not just for content moderation but also for civil rights. Advocates, consumers, and members of Congress want to know that dominant digital platforms embrace equality and non-discrimination based on one’s protected class status. The problem is that we don’t know whether dominant platforms, or digital platforms in general, comply with civil rights law. Inquiries into that question, whether regulatory or legal, have been stymied by platforms claiming that Section 230 immunizes them from civil rights suits on the theory that the discriminatory conduct was carried out by third parties and not the platforms themselves.

    We have a right to know what standards, particularly in the civil rights space, guide the digital platforms that influence so much of our daily lives. As more of our commerce moves online, advocates and legislators need to consider the role Section 230 can play as an impediment to effective civil rights enforcement and regulation. Certain kinds of suits impeded by Section 230 need attention from Congress and regulators, as civil suits are crucial to the expansion of civil rights law. Exempting civil rights law from Section 230 protection may not be the right approach. However, amending Section 230 to remove protections for advertising may actually address the concerns that civil rights advocates have raised about discrimination in advertisements on platforms. While exempting advertisements would not address all of the civil rights concerns that state attorneys general, for example, recently highlighted in a letter to Facebook, this approach does account for the role that advertisements play within the digital marketplace and represents a way to hold platforms accountable for their discriminatory conduct while also protecting free expression.

    Platforms Have a Civil Rights Problem

    In 2016, ProPublica published a report showing how advertisers on Facebook could discriminate based on protected categories (such as race, gender, religion, language, and country of origin) in housing advertisements. This was a revelation to many within the civil rights community, and two years later, Facebook started its own civil rights audit at the behest of members of Congress and civil rights organizations. Although Facebook has drawn the majority of public ire for alleged civil rights violations, there is evidence to suggest that many of the advertising structures underlying digital platforms have broader discriminatory impacts. For example, using lookalike audiences (the practice of using a set or sets of user data points to find similar users) can reproduce historic bias in areas such as housing or employment. This is not just a Facebook problem, as Google, Twitter, and even LinkedIn offer similar tools that enable advertisers to target their audiences.

    While the Facebook audit is a necessary step in the right direction, it took a massive public outcry and pressure from members of Congress before the company would examine the potential discriminatory impact of its platforms. Audits of this kind should be standard practice. Even with the audit, neither Facebook nor any other dominant platform provides meaningful transparency about its potential civil rights implications, even four years later. Section 230 makes the issue of transparency even more difficult by requiring litigants to show that the platform co-created the content in question in order to advance a lawsuit, an incredibly difficult task, especially as the line between co-creator of content and editor continues to blur.

    For example, could a hiring platform that shows job advertisements to prospective candidates be sued for discrimination in hiring? A potential plaintiff may argue that if a hiring platform, such as ZipRecruiter, treats the resumes of people who have enumerated protections under Title VII of the Civil Rights Act of 1964 differently for the purposes of ranking or advancing potential applicants, then the platform is liable for employment discrimination. The hiring platform would likely argue that the algorithm in question is editorial in nature and that the platform is acting within its discretion as a publisher of the employment advertisement in question. The hiring platform could also argue that applicant resumes are third-party content and that, as such, it has no liability under Section 230 because it is solely moderating content within its editorial discretion. Although this example focuses on employment, you could easily change this scenario to platforms that match housing or credit applicants to homes or financial institutions and still encounter the same Section 230 problem before a court could even reach the merits of the case.

    This is not just a dominant platform problem; it is a digital marketplace issue that has little to do with the size of any one platform and everything to do with the broader economic structure of digital platforms, as both the lending and employment platform marketplaces are incredibly competitive. Unfortunately, competition in the digital platform marketplace has not led to greater transparency or industry-led regulatory procedures. This lends credence to the idea that competition policy alone is not enough to alleviate the harms the digital platform marketplace can cause, and that the legal system and agency guidance are necessary to regulate these marketplaces.

    Current Limits of Section 230 in the Courts

    In general, if the alleged discriminatory behavior relates to how a platform curates, edits, distributes, or displays third-party content, or which audiences it chooses to show that content to, Section 230’s prohibition on treating a platform as a “publisher” of third-party content can block a claim. There are certain kinds of civil rights cases where Section 230 immunizes platforms for behavior that would be unlawful in any other context. In other cases, 230 is not an insurmountable barrier, but plaintiffs have to plead and argue carefully to avoid having their cases blocked.

    For example, if a platform carried an advertisement for an apartment building featuring only white models, a non-white person may see that advertisement as discriminatory. That is a legitimate claim under the Fair Housing Act, and a print publication, as well as the advertiser, can be held liable, as in Ragin v. New York Times, where the court allowed a Fair Housing Act claim against the newspaper to proceed. That decision included a prescient quote from the Supreme Court’s decision in Zauderer, noting that “the free flow of commercial information is valuable enough to justify imposing on would-be regulators the costs of distinguishing the truthful from the false, the helpful from the misleading, and the harmless from the harmful.” In a platform context, however, that case would not come out the same way. It would likely be dismissed because a platform cannot be held liable as a publisher or speaker of third-party content (though the discriminatory advertiser itself could still be liable).

    That’s not to say that a platform can never be liable for conduct that involves third-party content. A court might find, for instance, that a platform is in fact the “co-creator” of the allegedly discriminatory content. This was the analysis in Fair Housing Council of San Fernando Valley v. Roommates.com, where the court found that Roommates.com was not protected by Section 230 from Fair Housing Act claims insofar as it required users to disclose their gender, sexual orientation, and familial status when looking for a potential roommate. That said, the court did find that Roommates.com was protected by Section 230 for the open-ended comment section in which users described, in a sometimes prejudiced manner, the kind of roommate they wanted. Even in the face of civil rights liability, the court distinguished between content created or co-created by the platform and third-party content, which, even if discriminatory, still entitled the platform to Section 230 protection.

    It is not always clear where “publisher” activity by a platform stops and something else begins. Even with respect to third-party content, the success of a Section 230 defense can also depend on how the platform is being held liable. Conduct that in some way relates to third-party content might not be “publisher” or “speaker” activity, and holding a platform liable for it would therefore not run afoul of 230. For example, in HomeAway.com v. City of Santa Monica, the court held that HomeAway was responsible for complying with Santa Monica’s local ordinance because the ordinance regulated booking transactions, not the publication of third-party content. Similarly, platforms must follow privacy laws, even when they are gathering data on viewers of third-party content. While potential Section 230 defenses weren’t litigated, the Federal Trade Commission’s settlement with YouTube over Children’s Online Privacy Protection Act (COPPA) violations makes clear that such liability exists. As John Bergmayer states, “[i]n both cases the platform can continue to host the content in question but may have to change its business practices and other behaviors . . . Congress did not guarantee to YouTube or any other business its free choice of business model.”

    Although plaintiffs have found paths to holding platforms liable in some cases, the legal hurdles are, at a minimum, much higher for platforms than for other kinds of businesses. While scholars such as Pauline Kim and Spencer Overton have argued that it should be possible to hold platforms liable for civil rights violations like employment discrimination and voter suppression, the pleading requirements are onerous and not all platform behavior would be covered. There is a significant chance that the discriminatory behavior they describe would still be seen by a judge as a form of publication, which arguably includes all forms of information distribution. Combined with how courts have interpreted trade secret law, plaintiffs and even the government can have a difficult time clearing the barriers to bringing a colorable civil rights lawsuit, even under the best of circumstances.

    Moreover, courts have been asked to adjudicate platforms’ liability under civil rights law on very narrow legal questions that sometimes have nothing to do with the underlying discrimination itself. This is due in large part to the absence of a digital regulator or of regulatory clarity from the agencies of jurisdiction on how platforms should comply with broader civil rights law. Apart from specific cases, the existence of 230 makes it unclear who is responsible for redressing real civil rights harms and can signal that these harms are to be tolerated. This is both morally and legally dubious, as there should be no argument over whether platforms should comply with civil rights law: The answer should be a resounding yes.

    Why Exempting Civil Rights Law from 230 May Not Be the Best Approach

    While this may be an unpopular opinion among some, the fundamental ideas behind Section 230’s treatment of third-party speech are still sound. Online platforms are not like publishers that can vet and stand behind every user post, and we want online platforms to have a free hand to moderate content without fear of liability for what they take down. A regime where platforms are responsible for third-party discriminatory conduct could very easily lead platforms to chill the speech of their users for fear of liability. We have evidence that this would likely be the case, as seen in platforms’ struggles to curb COVID-19 misinformation. Current content moderation AI is not as sophisticated as some platforms would like us to believe, especially when moderating content from BIPOC communities. Complicating this even further, the roles of the platform and the user (employer, realtor, financial institution, etc.) are not always clear. Did the user engage in the discriminatory action with the tools provided by the platform, or did the platform present discriminatory tools to an unknowing user? A recent study showed that, even for neutrally targeted advertisements, Facebook delivered ads to different groups at different rates, even when controlling for population. This highlights that, even with the best intentions, there may be a need to focus on the platform’s own liability rather than on the third-party content of the advertiser or user.

    One proposal that tries to address these concerns is to separate a platform’s role in merely hosting third-party speech from its role in “amplifying” content or targeting it to specific users. The platform would keep 230’s immunity for the former while losing it for the latter. There are practical problems with this approach. First, how do you properly define “amplification” and “targeting”? Recent attempts at defining those terms are either too broad (and would sweep in too much activity) or too narrow (and do not cover the intended conduct). Second, in many cases the underlying conduct that is proposed to be exempted from 230 is not unlawful to begin with (though this would not be the case for some civil rights claims). Finally, there are legitimate and beneficial uses of targeting, like get-out-the-vote campaigns that benefit BIPOC communities. Threading this particular statutory needle is better left to regulators than to Congress, since regulators can adapt more quickly to changing market forces.

    In short, specific statutory or content-based exemptions to 230 might not be the best approach to regulating the harms associated with platforms. Carving out exemptions from 230 based on subject matter (civil rights, CSAM, hate speech, etc.) deemed “important” could have no end, and a law that needs that many exemptions should arguably be addressed more systematically. Focusing on conduct rather than content makes it easier for Congress to legislate these changes without putting a thumb on the ideological scales, and easier for the industry to implement the resulting reforms. While facilitating industry compliance should not be at the top of the list of concerns, particularly with respect to civil rights law, a conduct-based focus makes both enforcement and compliance easier.

    Removing 230 Protection for Advertisements Is a Preferable Approach

    Exempting all ads from 230 is a potential solution to some of the concerns highlighted above. While it is true that small publishers do not always control the ads that users see on their platforms, publishers should generally be responsible for content they accept money to publish and distribute. This business relationship should carry a standard duty of care encompassing the same obligations any other publisher would have. It would be clear what content this applies to, and there could be added requirements that paid-for content be conspicuously disclosed and kept separate from organic content. This would not mean that platforms would be responsible for the content that shows up next to an ad; rather, they would only be responsible for the content of the advertisement itself and its distribution.

    A standard critique of Section 230 reform ideas holds true here: Much of the harmful content that legislators and activists look to target in 230 reforms is not actually illegal, so removing 230 protection for that content would not make platforms liable for it. This applies to many “harmful” ads as well. Under this proposal, however, the increased liability for some content, like scams or fraud, could incentivize platforms to do more terms-of-service enforcement and be more selective about whom they do business with. In the context of civil rights, there is clear liability for publisher functions like deciding who gets to see an ad and who does not. A generalized ad exemption from 230 could address much of the harmful behavior that a straight-up civil rights exemption would cover, and would do more besides. As online advertising makes up a majority of ad dollars spent, there is a clear need to make sure that this market complies with the law.

    This proposed change to 230 is content-neutral and could have beneficial knock-on effects for issues like voter suppression and fraudulent advertising (pyramid schemes, counterfeit pharmaceuticals, etc.). If combined with a requirement such as the PACT Act’s incentive for platforms to take down material found illegal by a court, this could change the way platforms engage in content moderation, since platforms reward engagement to drive ad revenue. If the ads that are shown carry more liability, that may change the content moderation incentives for large platforms.

    We are not saying that this is the only approach, but rather that it is a thoughtful way to adjust Section 230 that would change platform incentives. Another angle could be to clarify the outer bounds of what counts as “publisher” activity, as the court in HomeAway did, so that completing transactions related to content would not be covered by Section 230. However, this could end up being another difficult line to draw, one better left to judicial development than to legislation.

    Integrating 230 and Civil Rights Analyses

    Advocates and legislators need to take civil rights law into consideration when examining Section 230 and, more broadly, platform regulation; it shows when they do not. If racial justice is going to be a larger part of our collective vocabulary on both sides of the political spectrum, we need to keep BIPOC communities front of mind as we think about what content moderation and platform accountability look like through the lens of Section 230 and its important protections.

    This kind of change to Section 230 also addresses some of the real harms that come from platforms, which the #StopHateForProfit campaign highlighted, as well as the lack of transparency that platforms give users. This is not about user content, but about platforms monetizing discriminatory content for profit. Publishers already face liability for advertisements. By focusing on the business transaction itself, this change to Section 230 would target the kind of unlawful discrimination that truly hurts users: not “viewpoint” discrimination, but the limiting of opportunities that can have generational impacts on people’s lives. To quote the great Fannie Lou Hamer, “no one is free until we are all free.” By taking meaningful steps to address how Section 230 intersects with civil rights enforcement, we take a major step toward setting everyone free.

    Image credit: Keith Helfrich on Unsplash