Facebook and Twitter Made the Right Decision. Big Tech Is Still Too Powerful.


    What a week. On Tuesday, if you stayed up late enough (or woke at 3 a.m. Wednesday morning to a screaming baby, as I did), we saw Democrats win control of the Senate via the runoff elections in Georgia. On Wednesday, we witnessed an armed insurrection at the Capitol Building, directed by President Trump in part through social media platforms, that left five people dead. On Thursday, we had a second to breathe, and then on Friday night, Twitter and Facebook suspended Donald Trump’s accounts (Facebook for only two weeks, Twitter permanently). We could not look away last week as the real-world impact of social media became glaringly clear. Our proposal for a digital platform regulator is an important tool for addressing the power of digital platforms, and it has become even more important after last week.

    The growing debate over the platforms’ choice to ban President Trump, given all the context leading up to this action, has seen folks making a few different arguments: 1) Facebook and Twitter see that Democrats will now have much more power in government, and so are suddenly trying to do the right thing to avoid regulation and other government intervention; 2) Donald Trump was using Twitter to incite an armed mob to attack the U.S. Capitol, putting lives and the peaceful transfer of power at risk, so it’s obvious that his accounts should be shut down; and 3) The fact that having a Facebook or Twitter account shut down or kept up has such an impact on our politics shows the power that these platforms have and what a problem that is.

    Beyond that, folks are wondering what we even want in this situation. Do we want a system where private companies can make such important decisions regarding speech? Or do we want a system where messages that incite such incredibly destructive acts can appear on mainstream, dominant platforms? I’d argue we don’t have to choose between these two bad options.

    Section 230 of the Communications Act protects the platforms’ ability to moderate content and to suspend accounts like President Trump’s, and that’s a very important feature of our law. At the same time, we need stronger competition policy to diminish the power of these companies. My hope is that with more competition, platforms will feel pressure to do more and better moderation of misinformation and harassing content. That doesn’t just mean enforcing the antitrust laws; it means a more comprehensive regulatory plan that would actually promote competition against the platforms.

    The particular example that folks are focused on this week — Twitter banning President Trump — is an imperfect one for the broader debate. Two features make it unusual, and they should not color the solution moving forward too strongly. 1) President Trump is one of the most listened-to people in the world, and not just because of his 88 million Twitter followers. The Twitter ban certainly affects him, but it doesn’t actually limit his ability to get his message out all that severely. For a regular user without this kind of following, being banned from a mainstream, dominant platform would have a much larger impact. 2) Twitter is popular and does benefit from network effects, but it likely doesn’t have the kind of gatekeeper power that comes not only from network effects but also from occupying a critical bottleneck role in the economy.

    Making laws about how platforms should moderate content is hard

    The fundamental challenge here is that while many of us are not happy with how platforms are moderating their content, the government — the way our society has chosen to collectively act — is limited by First Amendment principles in its ability to restrain speech. This is an important value we as Americans share. The ability to speak out in ways the government may not appreciate is crucial to a functioning democracy. More broadly, we believe that free speech is a fundamental building block of political life. This goes beyond the First Amendment: Promoting free expression on platforms means that platforms need to make choices about what speech and which users to allow that we would not tolerate from the government. Platforms make mistakes all the time, but banning or limiting President Trump’s account was long overdue. Still, just because we want platforms to make choices about speech that we don’t let the government make doesn’t mean we can’t expect safeguards. It is perfectly reasonable to expect due process, transparency, and consistency from platforms as they exercise their editorial discretion. But the most fundamental safeguards are competition and choice, so that no one platform has too much power.

    My colleague John Bergmayer has written extensively on the types of rules around content moderation that might be appropriate, including which types of services should be included (edge providers like social networks, not basic infrastructure like internet service providers) and how to provide transparency and meaningful appeal rights to users. Here’s a great example to get you started: “Due Process for Content Moderation Doesn’t Mean ‘Only Do Things I Agree With’.”

    Decisions about which types of content get amplified should rest with private actors rather than the government. And to be clear, that’s the decision social media platforms are making — which messages should be amplified, not which messages can be spoken or heard at all. When private companies make these decisions, a person can, in theory, go elsewhere to speak their mind. When the government makes them, a person doesn’t have that option. The problem is that some of the key private companies currently making content moderation decisions are giant, vertically integrated mega-platforms like Facebook and Google, where it doesn’t feel like a person can go elsewhere and actually be heard.

    That’s why we need a less consolidated market for these social media platforms. We need competition and consumer choice. Competition would make any one company’s content moderation decisions less consequential, since alternatives making the right call would be available. And it would make public pressure on platforms to remove harmful content more effective, since angry users could more easily leave. The fact that there are just a few incredibly powerful companies running the platforms where we discuss politics is a policy choice we are making through inaction, not an immutable feature of the market.

    Pro-competition policies are a potential tool for improving social media content moderation

    People and policymakers interested in improving platform content moderation should include competition-focused policies in their toolkit so that users can choose the platform that moderates content in the way they prefer. Today, users, content creators, and advertisers are compelled to use Facebook because of its large network. They want to be where the people are, and the people are on Facebook. If we can change our government policies so that there is real competition against Facebook, all these categories of users will finally be able to choose platforms with policies they prefer, instead of having to choose the platform with the largest network.

    Today, users mostly choose their social networks based on who is on the platform — they need to be on a platform with their friends, the content creators they like, and sometimes the institutions and businesses with which they need to connect. Interoperability requirements, one of the key pro-competition policies I’ve been advocating we apply to dominant digital platforms, would allow users to communicate across platforms, so they wouldn’t have to choose the platform with their friends anymore. Instead, they could choose the platform with qualities they prefer, and that might include moderating content responsibly. After seeing that election misinformation, far-right extremism, and other messages amplified by Facebook’s engagement-based algorithm played an important role in an armed insurrection at the U.S. Capitol, users might want to leave Facebook for an alternative platform that is more responsible.

    Advertisers have their own concerns about how Facebook moderates its platform, as the #StopHateForProfit campaign from this summer showed. In a competitive marketplace, these advertisers could follow the users they’re interested in reaching to whatever platform attracts them, while keeping their brands away from toxic content.

    With interoperability requirements, content creators could choose a platform that provides robust tools to protect against harassment or other toxic speech, while still cross-posting their content to the dominant platform. Content creators would also be less distressed by a takedown or account suspension, because they would have alternatives available to them. Of course, those alternative platforms might not be as popular. Extremist content reaching mainstream audiences through a general-purpose social network is a huge problem because it supports radicalization and recruitment to extremist groups. Facebook acknowledged in internal documents that its own recommendation algorithm was responsible for more than half of the people who chose to join extremist groups on the network. Thus, moving that extremist content off mainstream platforms provides a significant benefit. People who are not yet radicalized may not choose to even visit a social network, such as Parler, that is known to be full of toxic content. Conversely, some may choose to go to platforms, no matter how small, that reinforce their existing beliefs, in order to be with their tribe and “get their fix of identity-confirming news.”

    Other tools are also needed

    There are important ways in which competition policy is not a complete solution to all the concerns about content moderation. Most importantly, while competition often leads to diversity of consumer choices, it sometimes does not. My hope is that these policies would foster an environment where a diverse set of platforms can thrive: They try different content moderation policies, procedures, and levels of investment, and consumers choose the ones they prefer, while still being able to communicate across platforms as they please. Non-dominant platforms may do better content moderation (and better privacy, etc.), and consumers can choose those platforms. Facebook may feel the competitive pressure and improve its approach.

    However, another potential result might be that most competing platforms adopt the same content moderation policies as Facebook and try to compete on other metrics. Maybe the same reasons Facebook has chosen its often weak and ineffective content moderation policies — presumably that it is cheaper, and that toxic content drives engagement — would also lead competitors to adopt the same policies. Or, if the consumers most valuable to advertisers prefer a heavier moderation hand, many social media competitors may target those consumers by adopting strict moderation policies, and we’ll have few or no competitors offering more lax policies. It’s hard to predict which of these outcomes is most likely.

    Another important limitation of a competition policy solution is that it doesn’t actually stop the worst types of content. Hateful and inaccurate messages could still be posted. But mainstream platforms like Facebook would finally have an incentive to be attentive to their users’ preferences and remove, label, or slow the amplification of those messages.

    We should have other rules that provide some baseline for social networks that choose to do very little content moderation, but there are real limits on what those rules can and should be if we want to protect the right to free speech. Giving users the power to avoid platforms that deal in this toxic content is one of the best tools consistent with our free speech ideals.

    Practical considerations for implementing these competition policies in a way that promotes content moderation

    If we want pro-competition policies to promote competition in content moderation, those policies need to include some important features. Interoperability and non-discrimination requirements are the two pro-competition regulatory tools I think are most important to changing the business model of these dominant digital platforms on an ongoing basis. If we hope to see competition in content moderation policies and strategies, we need to clearly protect the platforms’ ability to moderate content when we set up those regulatory tools. That means making sure that interoperability and non-discrimination requirements don’t force a mainstream or high-moderation platform to carry content that doesn’t meet its chosen content moderation standards.

    Cross-posting across platforms is an important interoperability capability that dominant platforms should be required to make available. But if a user cross-posts something that violates a platform’s policies, that platform should still have the same rights to moderate the content. This could take the form of labeling, limiting the sharing or algorithmic amplification of the content, removing the content from the platform, or any other form of content moderation. A platform should not be obligated to leave a cross-post up if it violates the platform’s content moderation policies. At the same time, a dominant platform must not be able to use content moderation concerns as a pretext for banning an entire rival platform, when in reality its major concern might be competition from that platform. These types of disputes would be best handled on a case-by-case basis, such as through an agency adjudication process, where expertise can be developed over time.

    Similarly, non-discrimination requirements must also allow for content moderation decisions, like demoting or otherwise altering the algorithmic amplification of content that runs afoul of a platform’s policies. In some cases, it might make sense for a platform to ban an account that has violated its content moderation policies. An account that gets banned might argue that it was banned for competitive reasons rather than content-related ones. Such a dispute might turn on whether the account was really a competitor or potential competitor to the platform, as many content creator accounts are not. Again, the details of how to resolve such disputes would be best decided in agency adjudications.

    Going forward

    For me, processing the dizzying events of last week means figuring out what changes are needed and how I can support them. Advocates and policymakers concerned about how the platforms moderate content should incorporate pro-competition policies like interoperability and non-discrimination into their work. The public should also get involved: To start, contact your members of Congress to call for a digital regulator. Together with a reasonable baseline of responsibilities, such as due process-style protections and responsibility for content with which platforms actually have a financial relationship, giving all of us, as social media users, the power to vote with our feet for the content moderation policies we support will also lead to a better information ecosystem. We don’t need one social network that is everything to everyone. With interoperability, we can still communicate broadly while choosing a social network with harassment and misinformation policies that we think are best for us, and best for our society.