In late June, the House Judiciary Committee voted to approve, on a bipartisan basis, a six-part package of legislation designed to restrict dominant digital platforms from leveraging their power to disadvantage competitors or promote their own lines of business unfairly. We hope the bills will eventually have counterparts or companions in the Senate.
Our purpose here is to assess the impact of these bills on the platforms’ content moderation policies and practices. We want to be clear that this package of legislation is not primarily focused on content moderation; it is focused instead on promoting competition and addressing the power of the largest platforms by creating new antitrust and pro-competition laws that suit their unique dynamics. The bills are based on an extensive investigation into the platforms’ market dominance and anti-competitive practices, not specifically their content moderation practices. That said, we would want any evaluation or evolution of the bills to include their potential impact on the role of platforms in hosting and managing user content, and, by extension, on the quality and safety of information available to consumers.
First, in aggregate, the six House antitrust bills would serve to reduce the dominant role each platform plays in hosting and amplifying user-created content. With more competition in the social media, search, e-commerce, app store, and other platform markets, consumers could vote with their feet (or more accurately, their fingers) for the platforms whose content moderation policies they value. That inherently means that the power of the dominant platforms in political and social discourse will be diminished, and the power of the individual enhanced. Importantly, we can’t know that more competition will necessarily result in a “race to the top” in terms of content moderation that protects user safety. In fact, it may result in new platforms that cater to those who seek out harmful but legal content — but the reach of such content could be diminished, since distribution will be more fragmented, increasing the likelihood that only those users with an interest in content of that nature will see it.
But our focus here is on the two bills in the package with the most direct potential impact on the platforms’ policies and practices related to content moderation: those focused on nondiscrimination and interoperability. Overall, we believe these bills allow the designated platforms to continue to apply their own terms of service and community standards to any content that crosses their platforms. That means that love them or hate them, the digital platforms can continue to apply their own editorial discretion. (Those who want to address the question of content moderation directly may appreciate our Section 230 Principles to Protect Free Expression Online.)
H.R. 3816 American Choice and Innovation Online Act: Nondiscrimination
The American Choice and Innovation Online Act (H.R. 3816) primarily seeks to address anticompetitive discrimination, a strategy that platforms use to protect their gatekeeper power. That is, it seeks to prohibit the most powerful platforms from engaging in conduct that anti-competitively advantages or “preferences” their own products or services, disadvantages other business users, or anti-competitively discriminates among “similarly situated” business users in their markets. It has to accomplish that goal while preserving the platforms’ First Amendment rights as codified in Section 230 — that is, it has to preserve the right of platforms to moderate the content they host and display, regardless of its original source. It’s a hard problem, but there are prior examples that show how to address it carefully, and we believe the bill achieves this balance.
For example, in the program carriage provisions of the 1992 Cable Act, Congress sought to prevent cable systems from “conduct the effect of which is to unreasonably restrain the ability of an unaffiliated video programming vendor to compete fairly by discriminating in video programming distribution on the basis of affiliation or nonaffiliation of vendors” — that is, cable companies could base their carriage decisions on content, but not on whether or not the content was affiliated with a particular media or cable company. In practice, even though the law was written to prohibit certain conduct regardless of the cable company’s motivation, the law has been interpreted to not apply as long as the cable company can point to some plausible reason for its carriage decision other than the affiliation status of vendors.
The American Choice and Innovation Online Act seeks to avoid that shortcoming by including neither an “intent” standard, nor language that could be interpreted to be about the platform’s intent in its decision-making. This is not because intent is irrelevant; it’s because intent is hard to prove. If you create a law that prohibits certain conduct, but only if an enforcer or other plaintiff can meet an almost-impossible evidentiary burden (that is, they can prove what the platform intended in its conduct), you haven’t really done much. Thus, many of the new House bill’s requirements are categorical, and do not allow a platform to continue the same behavior simply by pointing to some harmless motivation.
The Supreme Court held that the 1992 Cable Act’s program carriage rules are constitutional, in part because they left the editorial prerogatives of cable companies intact. Cable companies were under no obligation to carry programming they disagreed with or considered to be of low quality. They were simply prohibited from commercially motivated discrimination.
In our view, the American Choice and Innovation Online Act would pass constitutional muster for the same reason. That is, provided platforms apply their editorial standards consistently to similarly situated business users, the bill largely preserves their content moderation rights as described in Section 230. (Exceptions would be if the terms and standards themselves are obviously contrived and pretextual.)
The bill also creates a defense for platforms in the event of legal challenges. For example, if a platform can demonstrate that its decision — even a decision to refuse to deal with a particular company entirely — was made because that company’s content or services violate the generally applicable terms the platform applies to everyone (including itself), then it will have a strong argument that its conduct “would not result in harm to the competitive process.” This protects platforms’ content moderation rights without the pitfalls of an intent standard, as detailed above. What matters is the effect on competition.
So, subject to the caveats above, nothing in the language of H.R. 3816 prohibits a platform from applying the same terms of service, consistently, to content, regardless of the platform it originated with: doing so does not constitute “disadvantaging” or “excluding” or picking winners and losers among “similarly situated” competitors. If certain kinds of disinformation, hate speech, incendiary rhetoric, or otherwise harmful content are precluded by the platform’s terms of service or community standards, those standards can be applied to all content, services, or products that cross the platform.
We want to acknowledge some groups’ concerns that this bill would make it difficult or impossible for covered companies to deplatform and remove from their sites any business that allows harmful content. These groups read the provisions more broadly, as a form of common carriage under which a platform would be required to carry content or products it otherwise would choose not to; for example, they contend that the law would allow “unmoderated” apps like Infowars and Parler to argue that they are treated differently from other “similarly situated” and “legitimate” businesses if they or their content are disallowed from Apple’s app store or from Facebook.
While we don’t agree with this assessment, there may be ways to clarify the bill so that its intent and purpose are unambiguous to everyone. One suggestion that has been made is to strike a particular provision that prohibits platforms from engaging in any conduct that may “discriminate among similarly situated business users,” section 2(a)(3) of the bill. However, we believe 2(a)(3) is an especially important provision in the bill, for reasons related to competition policy: It ensures that market players have a fair shot at competing against the platform itself. We’ll be sharing our thoughts on section 2(a)(3) and the American Choice and Innovation Online Act more broadly in an upcoming blog post.
Another suggestion that has been made is to make the whole bill subject to the consumer welfare standard found in antitrust law. But again, this would hugely diminish the impact of the bill, since the entire idea of the bill is to get at conduct current antitrust law has failed to successfully police. Putting everything through a consumer welfare lens will make it much more likely the market-concentrated status quo remains unchanged. We have decades of experience in how courts’ narrow interpretation of the “consumer welfare” standard stymies legitimate enforcement and we simply cannot allow the same thing to happen to these new regulatory tools. We will share our thoughts on this in the upcoming blog post, as well.
In summary, we believe the most accurate interpretation of H.R. 3816 is that it does not interfere with the ability of platforms to moderate objectionable content. We are also aware that others read the law differently, so some clarifying language might be in order. Strengthening the bill’s existing defense for conduct that doesn’t harm the competitive process, providing more guidance as to what the phrase “competitive process” means in practice, limiting the private right of action, or simply clarifying the language of provision 2(a)(3), may be more effective strategies to make clear that this is not a “must carry” standard, while still promoting competition against dominant digital platforms.
H.R. 3849 ACCESS Act of 2021: Interoperability
Another one of the antitrust bills, the ACCESS Act of 2021 (H.R. 3849), has an important interaction with the American Choice and Innovation Online Act. The bill seeks to allow users to more easily transport their data across platforms and even communicate with users on other platforms. It requires that designated platforms develop programming interfaces that allow “interoperability” with businesses that compete, or may compete, with the platform. That means users would be able to move content they create to new platforms.
This mandate for interoperability across platforms does not require that a platform ignore its own terms of service, either. So our view in regard to H.R. 3849’s impact on content moderation is the same as the one we hold for the American Choice and Innovation Online Act: It leaves the editorial prerogatives of covered platforms, expressed through their terms of service and community standards, intact and allows platforms to consistently apply these standards to content regardless of its original source platform.
Public Knowledge has consistently maintained that competition in the marketplace leads to better options for consumers. Neither the nondiscrimination nor interoperability bills, alone or together, will force changes in the platforms’ content moderation policies or practices — for better, or for worse. There are opportunities to make some changes in language to ensure that outcome in the bills as well as in the courts. But we believe the package of antitrust bills, in aggregate, will have the effect of creating more competition in social media (as well as search, advertising, e-commerce, and app stores). This will reduce the dominance — and therefore, potential for harm — of the existing platforms, create more competition for consumers’ time and attention (which may drive changes in platforms’ content moderation), and allow new options that deliver on the level of self-expression and safety that consumers want.