Could the FCC Regulate Social Media Under Section 230? No.

    Last week, Politico reported that the White House was considering a potential “Executive Order” (EO) to address the ongoing-yet-unproven allegations of pro-liberal, anti-conservative bias by giant Silicon Valley companies such as Facebook, Twitter, and Google. (To the extent that there is rigorous research by AI experts, it shows that social media sites are more likely to flag posts by self-identified African Americans as “hate speech” than identical wording used by whites.) Subsequent reports by CNN and The Verge have provided more detail. Putting the reports together, it appears that the Executive Order would require the Federal Communications Commission to create rules limiting the ability of digital platforms to “remove or suppress content,” as well as prohibiting “anticompetitive, unfair or deceptive” practices around content moderation. The EO would also require the Federal Trade Commission to somehow open a docket and take complaints about supposed political bias (something it does not, at present, do, or have the capacity to do – but I will save that hobby horse for another time).

    (I really don’t expect I have to explain why this sort of ham-handed effort at political interference in the free flow of ideas and information is a BAD IDEA. For one thing, I’ve covered this fairly extensively in chapters five and six of my book, The Case for the Digital Platform Act. Also, Chris Lewis explained this at length in our press release in response to the reports that surfaced last week. But for those who still don’t get it: giving an administration that regards abuse of power for political purposes as a legitimate tool of governance the power to harass important platforms for the exchange of views and information unless they promote its political allies and suppress its critics is something of a worst-case scenario for the First Amendment and democracy generally. Even the most intrusive government intervention/supervision of speech in electronic media, such as the Fairness Doctrine, had built-in safeguards to insulate the process from political manipulation. Nor are we talking about imposing common carrier-like regulations that remove the government entirely from influencing who gets to use the platform. According to what we have seen so far, we are talking about direct efforts by the government to pick winners and losers — the opposite of net neutrality. That’s not to say that viewpoint-based discrimination on speech platforms can’t be a problem — it’s just that, if it’s a problem, it’s better dealt with through the traditional tools of media policy, such as ownership caps and limits on the size of any one platform, or by using antitrust or regulation to create a more competitive marketplace with fewer bottlenecks.)

    I have a number of reasons why I don’t think this EO will ever actually go out. For one thing, it would completely contradict everything that the FCC said in the “Restoring Internet Freedom Order” (RIFO) repealing net neutrality. As a result, the FCC would either have to reverse its previous findings that Section 230 prohibits any government regulation of internet services (including ISPs), or see the regulations struck down as arbitrary and capricious. Even if the FCC tried to somehow reconcile the two, Section 230 applies to ISPs as well as to edge platforms. Any “neutrality” rule that applies to Facebook, Google, and Twitter would therefore also apply to AT&T, Verizon, and Comcast.

    But this niggles at my mind enough to ask a good old law school hypothetical. If Trump really did issue an EO similar to the one described, what could the FCC actually do under existing law?

    Forget the Other Issues? What Is the Best Case for the FCC?

    Let’s forget the question of whether the President can issue an executive order to an independent agency and pretend the FCC thought this up on its own. Let us also set aside the First Amendment issues. I have written a fairly lengthy description of the First Amendment issues associated with trying to regulate how platforms do content moderation in chapter five of The Case for the Digital Platform Act (available for free here, and chapter five available for independent free download here). Chapter five also contains charts, tables, and checklists to help keep track of the constitutional issues. So, if you are interested in the potential First Amendment problems (and how a sufficiently motivated FCC could try to route around them), I refer you there.

    I will also set aside any questions about the FTC, and whether it has any authority to address the issues Trump wants it to address in light of the language of Section 230. (You can read John Bergmayer’s blog post on Section 230 for how it does or doesn’t shield platforms from anticompetitive, unfair, or deceptive practices.) I want to focus on whether, under the FCC’s existing statutory authority, it can do anything like what the descriptions of the draft EO would require. Spoiler alert: not really.

    FCC Sources of Authority for Rulemaking: Sections 4(i), 502, and 416

    First question: Does the FCC have authority to make rules relevant to Section 230 (47 U.S.C. 230), can it create penalties, and does anyone who doesn’t hold an FCC license of some kind have to obey an order of the FCC? Without these three things, the game is over before we even get to what Section 230 actually says. Happily for the analysis, we can clear this hurdle pretty easily.

    The courts have long recognized that the FCC has general rulemaking authority to make rules about any statute under its jurisdiction. The usual citation is to Section 4(i) (47 U.S.C. 154(i)), which the D.C. Circuit and others have referred to as the “necessary and proper” clause of the Communications Act. It empowers the FCC “to make such rules and regulations . . . as may be necessary in the execution of its functions.” Some people get confused and think Section 4(i) is just about “ancillary authority,” which I won’t get into now. They should read the D.C. Circuit’s opinion in MPAA v. FCC, which explains the difference. Still, if you are one of those confused people who likes to think 4(i) is limited solely to administrative stuff, courts have found the FCC also has general rulemaking authority under Section 201(b) (47 U.S.C. 201(b)) (and, FWIW, Section 303(r)).

    Section 502 (47 U.S.C. 502) makes violating an FCC rule a violation of law, subject to a penalty of $500 per day for each violation. Finally, Section 416 (47 U.S.C. 416) imposes an obligation on “any person” – regardless of whether or not they hold a Commission license – to obey an order of the FCC.

    So we have the basic building blocks for the FCC to do its thing. But, as the D.C. Circuit made clear in MPAA v. FCC and ALA v. FCC, the authority to make rules isn’t enough on its own. The FCC has to be acting pursuant to some statute that gives it something to implement. Which brings us to the ever popular and frequently misunderstood 47 U.S.C. 230.

    What Does Section 230 Say That Would Allow the FCC to Do What the Trump EO Would Require?

    As always, we start with the plain language of the statute. The relevant provision, Section 230(c)(2)(A), states:

    (2) Civil liability

    No provider or user of an interactive computer service shall be held liable on account of—

    (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;

    An agency cannot rewrite the statute. It must obey the plain text of the statute. In addition, the FCC doesn’t actually have any role in implementing or enforcing the statute. This provision is entirely self-executing. It grants an “interactive computer service” (defined in Sec. 230(f) in a way that includes the social media platforms Trump wants to reach) immunity from any sort of civil liability for taking down content for a variety of reasons, including finding the content “objectionable.”

    The good news for the FCC is that the statute explicitly covers the relevant entities. That solves our jurisdictional problem. Usually, the FCC only has jurisdiction over interstate communications, and doesn’t have much direct regulatory authority over “information services.” But here, the statute provides jurisdiction on its plain face. We therefore don’t have to come up with any fun alternate theories of jurisdiction (most of which are barred by RIFO).

    Unfortunately for the FCC, the good news ends there. For the FCC to regulate, there needs to be some direction from Congress. This direction can take the form of a direct command, such as a command to make rules “in the public interest,” or a command to protect people’s privacy (HINT! HINT!). Alternatively, the delegation of authority to regulate can come indirectly through “ambiguity.” Under Chevron U.S.A., Inc. v. NRDC (aka the “Chevron doctrine”), when Congress uses ambiguous language in a statute, it delegates power to the agency to “fill in the gaps” and resolve the ambiguity. That doesn’t give the agency permission to do whatever it wants. The resolution of the ambiguity cannot be either contrary to the clear intent of Congress, or utterly unsupported in light of the overall purpose of the statute and traditional rules of statutory interpretation.

    (I’m going to skip the whole business about judicial deference because that would add another 1500 words or so and, as I mentioned above, I don’t think this EO will ever happen.)

    Looking over the plain language of the statute, we don’t see any explicit commands to the FCC to do anything. The statute is directed at courts, telling them not to hold providers liable for removing content, even if the content is constitutionally protected. It doesn’t direct the FCC to enforce anything. To the contrary, Section 230(c)(2)(A) is a “do not enforce, even if you normally could” instruction, so the FCC has no hook to define how to apply the immunity granted by the statute.

    What about indirect delegation/ambiguity? Again, I’m going to skip all the stuff the FCC previously said in RIFO about how Section 230(b) directs the FCC to keep its regulatory paws off the interwebz, as I’m doing the best case scenario for the FCC. But if we were to apply the logic of the RIFO, it would be game over for any FCC regulations. Why? Even if we found an ambiguity, the RIFO’s interpretation of Section 230(b) would make imposing regulations under 230(c) inconsistent, arbitrary, and capricious.

    Looking through the statute, we do find one potentially ambiguous phrase. The statute only protects “good faith” removal of content that the provider considers “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Could the FCC leverage the ambiguity in “good faith” to somehow impose a “no political bias” rule on sufficiently large platforms (while presumably leaving even larger ISPs like AT&T alone)? Could the FCC then use its powers under Sections 416 and 502 to order platforms to stop “discriminating” against conservative views and impose fines for disobeying?

    As R.E.M. famously put it, “you can’t get there from here.”

    Once again, I’m going to ignore all the stupid things the FCC said in the past about interpreting the words “good faith” in the context of the cable retransmission consent statute. As I wrote in what became known as the “Man Pants” blog post, the FCC interpreted “good faith” so narrowly as to deny itself the authority to actually do anything. The only reason I mention this is so you all can get a sense of how much, as a practical matter (if it ever came to that), the FCC would need to reverse itself or otherwise explain away. But even so, “you can’t get there from here.”

    Why Doesn’t “Good Faith” Give the FCC What It Needs?

    Part of the problem is that the plain meaning of “otherwise objectionable” means the provider can limit access or remove content for pretty much any reason that doesn’t otherwise violate the law. That includes discrimination based on political point of view, or just “no Twitter for you!” This would appear to exclude unfair, deceptive, or anticompetitive motivations, since in those cases the motive for limiting access is not any objection to the content but a desire to engage in an illegal activity. So the FCC could “clarify” this if it wants. But so what? The statute does not give the FCC any enforcement role. Even if someone does remove content for purely anticompetitive reasons, it isn’t the FCC that would go after that. That goes to the FTC or the Department of Justice, or is raised as an issue in private litigation.

    Nor does anything in “good faith” let you distinguish between the types of “interactive computer services,” or distinguish based on size. Something either is good faith or it isn’t. If it is good faith for little baby platform, it is good faith for giant platform. Likewise, nothing about defining “good faith” means you can add a notice requirement. If the FCC actually had a role in enforcing Section 230(c), it could require an “interactive computer service” to notify anyone whose content is removed or otherwise limited. But the statute gives the FCC no role in enforcement, either express or implied, and the FCC cannot write one for itself absent delegated authority. The FCC does not need to impose a notification rule (or even a “transparency rule”) as “necessary in the execution of its functions” (to quote Section 4(i)) because it has no “functions” here to execute. At most, it has authority to define for courts and civil enforcement agencies what constitutes “good faith” when they carry out their functions.

    If the words “otherwise objectionable” were ambiguous, we might reach a different result. But they aren’t. “Objectionable” is an entirely subjective standard – that’s why it needs “good faith” to modify/limit it so we can still enforce standard competition and consumer protection law.

    What About Other Parts of Section 230?

    The leaks all suggest that the draft EO is directed at 230(c)(2)(A), which makes sense, because that is the only part of the statute that even arguably gives the FCC something to do. But it is worth pointing out the difference between 230(c)(2), and 230(c)(1), which flatly states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This is the section of the law, not 230(c)(2), that many prominent conservatives (and liberals) have been complaining about, because this is the section of the law that states that an online service cannot be held liable for speech torts, such as defamation — even if the service moderates content heavily, even if it arbitrarily takes down posts it doesn’t agree with on political grounds, and so on. 230(c)(2) has some bearing on these issues, as well, but it goes further. While (c)(1) states that a service cannot be held liable as a speaker or publisher of content it leaves up, (c)(2) shields them from liability for content they take down, and not just from speech torts, but from, for example, claims of economic harm. (c)(2) also shields interactive computer services that don’t host or even transmit content at all, such as the creators of parental blocking software.

    Unlike 230(c)(2), (c)(1) has no “good faith” requirement. It is unconditional. Nothing the FCC does or says about “good faith” has any bearing on this portion of the law. Neither does it say that interactive computer services are not publishers — it simply says that they cannot be treated as publishers for the purposes of liability. They can act as and call themselves publishers or media companies with respect to “information provided by another information content provider” all day long and it still doesn’t make them liable for such content. (Of course they are, as they always have been, liable for their own content, just as The New York Times is liable for the stories it publishes, but not for user comments.) Also, under the statute as it is written, and has been consistently interpreted by the courts, even editing, selecting, promoting, or endorsing content originally created by a third party still does not open a service up to liability for speech-related torts. In other words, the repeated attempts to “gotcha” Facebook or YouTube into “admitting” that they are publishers are legally meaningless. If elected officials don’t like this, they have to go through the trouble of actually passing a law — not that every legislative proposal is itself a good idea. The President can’t merely issue an order to agencies to figure out some way to make a statute go away.

    Conclusion

    I’ve mused about some additional fairly arcane theories, such as trying to somehow use forbearance to eliminate Section 230. But none of them work, and I am not going to write even more thousands of words on even more tenuous and outlandish theories. That’s why I assumed everything possible to assume in favor of FCC regulatory authority here. And even making the most favorable assumptions possible, “you can’t get there from here.”