What Section 230 Is and Does — Yet Another Explanation of One of the Internet’s Most Important Laws


    This is the first blog post in a series about Section 230 of the Communications Decency Act. You can view the full series here.

    Section 230 of the Communications Decency Act immunizes internet platforms from any liability as a publisher or speaker for third-party content, and it is one of the most important and wide-reaching laws that affect the internet. With the increased attention on online platforms in the past few years, it has also become one of the most controversial. It is widely misunderstood, or misconstrued, by both its supporters and its detractors. Much of the discourse around the law clusters at two extremes: on one side, those who want to defend it at any cost and view it as a general charter against platform regulation; on the other, those who simply want to repeal it without recognizing what the consequences could be. At the same time, both the press and politicians tend to either overstate or misunderstand what 230 does.

    To be clear, Public Knowledge believes that simply repealing Section 230 would be a mistake. Harold Feld’s recent book explains this (among many other things). At the same time, internet platforms should not be exempt from the kinds of obligations that other kinds of businesses must meet, and Public Knowledge supports greater oversight and regulation of online platforms generally. Section 230 as it stands today is not sacrosanct, and new legislation that changes the obligations of platforms is likely necessary. A follow-up post to this one will explain what those changes might look like.

    But before we get there, this post will explain what 230 does, why it was enacted, and why its wholesale repeal would likely be counter to the aims of those who view 230 as an obstacle to greater tech accountability.

    Reading the Statute Helps: What Section 230 Says

    The most relevant part of Section 230 is subsection (c), which states:

    (c) Protection for “Good Samaritan” blocking and screening of offensive material

    (1) Treatment of publisher or speaker

    No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

    (2) Civil liability

    No provider or user of an interactive computer service shall be held liable on account of –

    (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

    (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

    The definition of “information content provider” is also relevant: “Any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

    Subsection (1) immunizes internet platforms from any liability as a publisher or speaker for third-party content. (A platform can still be held liable for its own content, of course. So, for example, the Wall Street Journal could be liable for one of its own articles online, but not for the comment section.)

    It seems pretty simple to understand what it means that a platform cannot be treated as a “speaker” of third-party content — it’s that you can’t simply put the platform “in the shoes” of the speaker, just because it hosts and disseminates potentially tortious material. It also means that a platform cannot be held liable in its role as a publisher, either under theories that hold publishers liable as speakers, or for the exercise of editorial discretion.

    But it is worth exploring what it means that a platform can’t be held liable as a publisher, since publishers have an editorial and expressive role in disseminating content originally written by others — they are not merely transmitters. Section 230 protects platforms from liability as publishers — but it still allows them to act as publishers. As the 4th Circuit Court of Appeals said in an early case applying Section 230, “lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred.” A publisher’s role also includes reviewing content, and deciding which content to highlight. This means that a platform is still protected from liability for the contents of user-submitted material, even if it chooses to highlight and promote that material, or even use it in online advertisements. As the 9th Circuit held, “proliferation and dissemination of content does not equal creation or development of content.”

    It often surprises people that Section 230 permits a platform to alter user-posted content without incurring liability for it. But this simply means that a platform, after content is posted, can correct the spelling of a post, replace swear words with asterisks, and even delete a problematic paragraph. Of course “edits” can at some point cross the line into the development of new content — for example if a platform adds libelous material of its own to a user’s post. But editing is not authorship. As long as the underlying “material” or “information” was created or developed and then “provided” (read the statute again) by a third party, Section 230 shields the platform.

    In light of current debates over the role of major online platforms, one of the most important and basic things to understand about Section 230 is that it authorizes platforms to exercise editorial discretion with respect to third-party content without losing the benefit of the law, and that this includes promoting a political, moral, or social viewpoint. (Even one you don’t like.) This is what the plain text, the legislative history, and the leading cases all say. A pro-Trump messageboard is still covered by Section 230 if it deletes all anti-Trump posts, and if Twitter or Facebook chose tomorrow to ban all conservatives, or all socialists, Section 230 would still apply. Whether this is good policy or good politics is a different discussion. But this is the law.

    Because Section 230 is so broad, the fact is that platforms usually win cases in which plaintiffs seek clever ways to hold them liable for publisher-type functions. So, for example, in the recent Herrick v. Grindr case, an attempt to argue that Grindr was liable for furnishing a defective product failed, because the specific way that Grindr was alleged to be defective related to its editorial functions as a publisher. (Again, this is a comment on the law as it stands, not a perspective on what the law should be.) Even courts lack the power to order platforms to take down content that has been found to be defamatory, because, as one court found, an “action to force a website to remove content on the sole basis that the content is defamatory is necessarily treating the website as a publisher, and is therefore inconsistent with section 230.” Whether a legal shield of such strength makes sense from a policy perspective is an interesting question, but the baseline for any such discussion has to be an accurate understanding of what the law actually says today.

    Subsection (2) protects online platforms from liability arising from content moderation decisions, even if that liability has nothing to do with “publishing” or “speaking,” provided that the moderation decisions were undertaken in good faith. It also applies to a platform restricting access to its own content, not just third-party content. As for “good faith”: first, it is important to understand that nothing in subsection (2), including the good faith requirement, is a condition on subsection (1). A platform’s immunity from treatment as a publisher or speaker of third-party content is unconditional. Rather, subsection (2) protects platforms from things such as claims from people who are upset that their content was removed from a platform and who might be able to frame their complaint as a non-speech tort. Perhaps a platform could be liable for restricting access to material if it could be shown that it did so maliciously in some way, and outside of any conceivable role as a publisher. For example, this subsection would likely not shield a platform from civil antitrust claims, or from a breach of contract argument arising from its terms of service, as these claims would presumably involve bad faith. However, a platform exercising extreme editorial discretion (for example, by deliberately censoring vegans or climate change activists because it doesn’t like them) would still be protected: “good faith” does not imply “good judgment.”

    A 2009 case from the 9th Circuit explains the different roles of these different sections well:

    Subsection (c)(1), by itself, shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties. Subsection (c)(2), for its part, provides an additional shield from liability, but only for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider … considers to be obscene… or otherwise objectionable.” Crucially, the persons who can take advantage of this liability are not merely those whom subsection (c)(1) already protects, but any provider of an interactive computer service. Thus, even those who cannot take advantage of subsection (c)(1), perhaps because they developed, even in part, the content at issue… can take advantage of subsection (c)(2) if they act to restrict access to the content because they consider it obscene or otherwise objectionable. Additionally, subsection (c)(2) also protects internet service providers from liability not for publishing or speaking, but rather for actions taken to restrict access to obscene or otherwise objectionable content.

    One final point. Section 230 has some pretty clear carve-outs for intellectual property law and for federal criminal law. This means that an online platform can be held liable for copyright infringement for material posted by users (the DMCA, not Section 230, controls this), and that it can be held criminally liable under federal law, even in its role as a publisher. Additionally, FOSTA-SESTA puts further conditions on the applicability of 230, and platforms very much are legally required to remove child pornography. However, these exceptions are not particularly germane to the present discussion; the limits of 230 will be discussed in a future post.

    The Legal Background and Its Connection to 230

    Most people reading this probably understand that it is possible to get sued for things you say. The protections of the First Amendment are quite broad, but someone who commits libel, invasion of privacy, intentional infliction of emotional distress, or some other tort — using words alone — can still be required to pay damages in court. The First Amendment puts quite a few guard rails on claims of this sort, which means that, for example, defamation law works differently in the United States than in some other common law countries. But even in the U.S. people can still be held to account for damage they do with words, just like they can be held to account for damage they do with a baseball bat.

    Companies can be liable for these torts, as well. The easiest case is when a publisher is publishing its own employee’s words. There, the publisher is simply seen as the speaker. Similarly, when employees commit torts in the scope of their employment, the employer is responsible under the doctrine of vicarious liability.

    The harder question is when a publisher is publishing some other person’s words. In some cases, the publisher does not have any particular duty to verify the accuracy of what it publishes, so it’s hard to hold it liable. For instance, with respect to a book that allegedly contained erroneous and dangerous information about mushroom identification, the 9th Circuit held,

    In order for negligence to be actionable, there must be a legal duty to exercise due care. The plaintiffs urge this court that the publisher had a duty to investigate the accuracy of The Encyclopedia of Mushrooms’ contents. We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes…. [T]here is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs.

    This is pertinent to online platforms because the dissemination of false and damaging information is an issue of increasing relevance. But in other situations, such as libel, the law is that publishers are liable for anything they publish, even if the actual speaker is not an employee or agent of the publisher. (If you really want to get technical, you can crack open the Restatement (2d) of Torts I’m sure you have handy, and note that defamation is defined as a publisher offense [§ 558], but that “Any act by which the defamatory matter is intentionally or negligently communicated to a third person is a publication” [§ 577]. So there is no difference between handing your weekly column off to the newspaper and the newspaper publishing it, as far as liability goes, and in case you were wondering, simply republishing counts, too [§ 578].)

    But there is still a great deal of nuance here. Just because the same legal standard applies to both the publisher and the speaker does not entail that any time the actual speaker commits libel, the publisher automatically does as well. In Gertz v. Robert Welch, the Supreme Court held that under the First Amendment, there cannot be strict liability defamation offenses — the defendant must act with some kind of culpable state of mind, or mens rea. That is, the accused defamer must act negligently, or recklessly, or knowingly — something of that sort. It varies by state. The exact same false and damaging statement may be libel if the defendant writes it recklessly, but not if the defendant writes it for a legitimate reason while believing it to be true. Because the mens rea requirement must be applied for each defendant, it would still be necessary to separately establish a mens rea for a publisher defendant — you can’t simply impute the original writer’s state of mind to the publisher. In practice this might not be difficult, for example if the requisite state of mind is relatively easy to prove, such as negligence. But it is worth remembering that it might be possible in some circumstances to prove the mens rea for the publisher but not the writer, or vice versa. (Of course a platform or a corporation does not actually have a state of mind to begin with. The law has ways of dealing with that.) Naturally, in the case of a newspaper publishing its own employees’ work, the publisher is liable. But this is because employers in general are responsible for their employees, under the doctrine of vicarious liability mentioned above. Courts have typically declined, however, to find vicarious liability just because some sort of relationship exists between a publisher and a writer — the relationship must be much closer.

    I’m going into all of this because it matters a great deal to an analysis of Section 230. By saying that a platform cannot be held liable as a speaker, 230 says that you cannot put the platform “in the shoes” of a user, such that if the user commits libel, the platform necessarily does as well. Gertz’s requirement that speech torts must have some level of associated mens rea also implies that strict liability of platforms for user-posted content would be unconstitutional. It is necessary to establish some kind of standard of care for the platform itself, one that relates to its own responsibilities regarding the dissemination of content.

    Why 230 Was Enacted, and Why It Should Not Be Simply Repealed

    Section 230 was enacted for a pretty straightforward reason: Early caselaw about the liability of online platforms was a mess. The two most notable cases, and the standards of liability they announced, will be discussed below.

    Cubby v. CompuServe and Distributor Liability

    One case, Cubby v. CompuServe, held that online platforms were not publishers of content, but distributors. It is worth digressing on that point for a moment.

    Traditionally, distributors of speech, such as bookstores, are only liable for that speech if they know or have reason to know its contents. So, if the possession or distribution of obscene material is unlawful (as it was in Los Angeles in the 1950s), you can’t hold a bookstore liable for merely having this material on its shelves. When this question came before the Supreme Court in Smith v. California, the Court found that prosecutors needed to show that the bookstore knew it had obscene material on its shelves. In a passage worth quoting, the Court explained why, distinguishing the case from the sale of unsafe food (where the seller can be “strictly” liable — that is, liable even if it is unaware of the problem):

    There is no specific constitutional inhibition against making the distributors of food the strictest censors of their merchandise, but the constitutional guarantees of the freedom of speech and of the press stand in the way of imposing a similar requirement on the bookseller…. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature…. “Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.” And the bookseller’s burden would become the public’s burden, for by restricting him the public’s access to reading matter would be restricted… The bookseller’s self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered.

    Though obscenity and bookstores were at issue in that case, the articulation of distributor liability is broadly applicable. And it was applied in Cubby. The District Court for the Southern District of New York found that distributor liability was the most appropriate standard. After citing some of the passage above from Smith, it found that “Technology is rapidly transforming the information industry. A computerized database is the functional equivalent of a more traditional news vendor, and the inconsistent application of a lower standard of liability to an electronic news distributor such as CompuServe than that which is applied to a public library, book store, or newsstand would impose an undue burden on the free flow of information.” Finally, because the plaintiffs did not allege that CompuServe had knowledge of the actual contents of the material they were suing over, the court ruled in CompuServe’s favor.

    Distributor liability seems like it may have been a reasonable standard to apply to online platforms, the open question being when, exactly, a platform should “have reason to know” about the content it carries. Could a platform just refuse to do any moderation or review of posted materials and claim 230(c)(1)-like immunity under this standard? Or would courts have imposed some common law duty to monitor? It’s an interesting speculative exercise, but it is merely speculative, because first another case, and then Congress with Section 230, intervened.

    Stratton Oakmont v. Prodigy and Publisher Liability

    The other most notable case involving platform liability for third-party content was Stratton Oakmont, Inc. v. Prodigy Services. In that case, a New York trial court observed that Prodigy did engage in some moderation of posted materials, and that in its marketing, it “held itself out as an online service that exercised editorial control over the content of messages posted on its computer bulletin boards, thereby expressly differentiating itself from its competition and expressly likening itself to a newspaper.” The court even quoted Prodigy as stating, “We make no apology for pursuing a value system that reflects the culture of the millions of American families we aspire to serve. Certainly no responsible newspaper does less when it chooses the type of advertising it publishes, the letters it prints, the degree of nudity and unsupported gossip its editors tolerate.” On these facts, the court held that Prodigy could be held liable for third-party materials not as a distributor, but as a publisher.

    A Paradox for Platforms

    Taken together, these cases seemed to set up an unfortunate dilemma for online service providers. If they simply operated as unmoderated platforms, they would likely face relatively little liability as distributors. But if they moderated their platforms, even just by removing abusive comments, pornography, or pirated software, a court could potentially find that this transformed them into publishers. Thus the state of the law created a disincentive against moderation and seemed to encourage platforms to err on the side of anarchy. Ironically, one of the main criticisms of Section 230 today is that it protects platforms that do not engage in enough content moderation. But relative to the pre–230 case law, 230 is also what permits platforms to moderate content without fear of accruing extra liability for doing so. After all, this is why it was enacted as part of the Communications Decency Act, most of the rest of which was struck down as unconstitutional, but which was broadly aimed at scrubbing the internet of porn. And this is why 230 itself is captioned “Protection for private blocking and screening of offensive material,” with its heart, subsection (c), captioned “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” In short, Section 230 sought to overrule Stratton Oakmont by allowing platforms to moderate and edit material on their sites without that moderation opening them up to lawsuits over what they took down and what they left up. It is sometimes said to be the law that allows the internet as we know it to exist. This might be overstated, but 230 certainly allowed more heavily curated platforms to exist, relative to the rule followed in Stratton Oakmont.

    What Might Have Been?

    Section 230 did not merely overturn Stratton Oakmont and put in place a more moderate rule like that in Cubby. Cubby announced distributor liability as the appropriate standard for platforms, not the blanket immunity from liability as a publisher or speaker that 230 enacted. It is possible that the common law would have evolved in a sensible direction, after some period of uncertainty, without Congressional intervention — perhaps distributor liability, or a form of publisher liability that more expressly centers the need to establish a separate mens rea for the platform, or something new and platform-specific would have developed. This is why it seems hyperbolic to say that, without Section 230, the internet as we know it could never have developed. Other countries took different paths, and the common law is an imperfect system that, after some detours, often hits on the right balance of duties and liabilities.

    However, this works best when the law concerning the duties of an industry, and the industry itself, can grow together. But now, in 2019, online platforms have a more important role in American life than the drafters of 230 likely ever imagined. If 230 were simply repealed, it is reasonable to assume that the undeveloped and outdated pre–230 case law would spring back into life, and in many states there would be no controlling caselaw at all. That is an untenable situation for such an important part of today’s economic, media, and cultural landscape.

    Simple repeal could lead to unmoderated cesspools on the one hand, and to responsible platforms beset by lawsuits and crippled by damages on the other. This would be disruptive both socially and economically. While it is possible that the courts would eventually develop sensible standards following in the footsteps of Cubby, a better option for those who would change the legal standards governing online platforms is to actually articulate what those standards should be, and to propose specific legislation, rather than assuming that the legal baseline absent 230 would lead to better results.

    ***

    A follow-up to this post will discuss just what such proposals could look like. While that post will not specifically endorse any of the proposals, they are intended to illustrate how platform responsibilities could be heightened in some circumstances without creating disincentives against moderation, limiting the ability of platforms to serve as forums for free speech, or unconstitutionally encouraging platforms to embrace particular viewpoints. It will also delineate some of the outer bounds of Section 230, which show that heightened responsibilities for platforms aren’t always foreclosed by the statute. Section 230 and the legal standards surrounding online platforms and third-party content are not sacrosanct, but any changes to the law should be approached cautiously.