Moderating Race on Platforms

    In the early fall of 2019, Ryan Williams was driving out of a garage with his wife and child when he was allegedly called a racial epithet by his white neighbor and the neighbor’s daughter. When Williams got out of his car, the neighbor called the police, and as the police arrived, Williams, like many people of color, recorded the interaction on his phone. According to one of the few accounts of the incident, a perspective piece in the Washington Post, the video appears to show the neighbor and the neighbor’s daughter acknowledging that they called Williams the epithet and apologizing. The police asked some questions, decided that the incident was not worth any more of their time, and left. News outlets covered the incident and, in light of what happened, Williams decided to write about what he and his family went through, and more broadly about race relations in the U.S., on Medium, a popular blogging platform, and to post his video on YouTube.

    In the Medium post, Williams talked about how the incident made him feel and also called out the neighbor by name for his actions. Currently, neither the Medium post nor the YouTube video is online; both were subsequently taken down by their respective platforms. What is at issue is that Williams found himself on the receiving end of seemingly neutral content moderation policies that, like other content moderation policies, may disproportionately marginalize the voices of people of color. This is not an uncommon scenario. As Recode highlighted in August, “natural language processing AI — which is often proposed as a tool to objectively identify offensive language — can amplify the same biases that human beings have.” These findings are consistent with numerous studies and articles that have found that black speech is criticized more heavily than comparable speech by white users.

    This is one example of how the way that platforms enact their content moderation policies is having a disparate impact on communities of color. “Disparate impact,” a term the Supreme Court used in Griggs v. Duke Power Co., is a doctrine under which a facially neutral policy is considered discriminatory if it has a disproportionate or statistically significant impact on a protected class of persons. As Black Lives Matter and the Arab Spring showed the world, what is shared on social media can effect real-world change, and how content is moderated, whether by AI or by a human, can have a profound impact on the collective discourse. Williams’ post is an example of how racial justice, Section 230 of the Communications Decency Act, and content moderation decisions made by companies will now be intertwined for the foreseeable future.

    Section 230 and Content Moderation

    Section 230 of the Communications Decency Act allows platforms to moderate third-party content. (For a full history of Section 230, how the law works, and what the law really means, see our Section 230 blog series.) This is fundamentally a good thing. In an ideal world where multiple platforms are vying for our screen time, superior content moderation policies and superior enforcement of those policies could be considered a market advantage. Different policies for different platforms could, if there were true competition among platforms, highlight a diverse set of voices, much like magazines or newspapers highlight different voices in their content (the Wall Street Journal versus the Afro, for instance). However, because of the consolidation of platforms, there is limited competition, which makes the content moderation policies of large platforms all the more important. As highlighted by what happened to Williams, content moderation policies have a powerful impact on the conversations that take place over the internet.

    People on both ends of the political spectrum complain about bias by tech companies in content moderation. However, a study found that black people were one and a half times more likely than their white counterparts to have their content flagged on Twitter, and that content was more than twice as likely to be flagged if it was written in African American Vernacular English (AAVE). This makes the fact that Williams’ post was taken down by Medium all the more interesting. To be clear, the arguments made here are not about the decision to take down the piece; they are about the underlying policy that informs platform content moderation decisions, the decisions the platforms make, and the ripple effects they have. (This issue was also highlighted in John Bergmayer’s paper on dominant platforms’ responsibility to provide due process protections to users.) As a point of juxtaposition, I will compare Medium’s content moderation policies with those of another company that has had content moderation issues in the past: Airbnb.
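
    To make the idea of “disparate impact” in moderation concrete, here is a minimal sketch, in Python, of the kind of flag-rate comparison that underlies findings like the study above. Everything in it is an assumption for illustration: the function names are mine, and the counts are invented so that the resulting ratios roughly echo the reported 1.5x and 2x figures. It is not the methodology of any study or platform discussed here.

        # Minimal, hypothetical sketch of a disparate-impact check on moderation outcomes.
        # All counts are invented; they are chosen only so the resulting ratios roughly
        # echo the 1.5x and 2x figures reported in the study discussed above.

        def flag_rate(flagged: int, reviewed: int) -> float:
            """Share of reviewed posts that the moderation system flagged."""
            return flagged / reviewed

        def relative_flag_rate(group_rate: float, baseline_rate: float) -> float:
            """How many times more often a group's posts are flagged than the baseline's."""
            return group_rate / baseline_rate

        if __name__ == "__main__":
            baseline = flag_rate(flagged=40, reviewed=1_000)     # hypothetical: posts by white users
            black_users = flag_rate(flagged=60, reviewed=1_000)  # hypothetical: posts by black users
            aave_posts = flag_rate(flagged=88, reviewed=1_000)   # hypothetical: posts written in AAVE

            print(f"Black users vs. baseline: {relative_flag_rate(black_users, baseline):.1f}x")
            print(f"AAVE posts vs. baseline: {relative_flag_rate(aave_posts, baseline):.1f}x")

    The exact numbers matter less than the structure of the comparison: a facially neutral system can still produce very different outcomes for different groups, which is precisely the pattern the disparate impact doctrine describes.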

    Medium 

    Medium is a platform dedicated to bringing together writers, thinkers, and storytellers to “bring you the smartest takes on topics that matter.” Individuals can become members, write stories to be published on the platform, and promote content to readers who are interested in that specific topic area. These articles can be about almost anything, from insightful think pieces, to articles about historical events, to personal stories that people want to share with the world. The platform has a page of content rules that outlines what can and cannot be posted on its website. It states what happens if someone is found to have broken the platform’s rules: that user’s post is removed and the user’s account is suspended without notice until Medium can determine whether the post does in fact break the rules. If someone wants to appeal a takedown or the deletion of their account, Medium provides an email address and says it will “consider all good faith efforts to appeal,” with no further redress articulated.

    While the list is not exhaustive, Medium states that, “[i]n deciding whether someone has violated the rules, we will take into account things like newsworthiness, the context and nature of the posted information, the likelihood and severity of actual or potential harms, and applicable laws.” Neither the calculus behind Medium’s content decisions nor its appeals process is transparent or straightforward. Medium says that it “will consider all good faith efforts to appeal,” without elaborating on what those good faith efforts might include or what that “consideration” will entail. Without the article in question, it is hard to determine which of Medium’s 17 rules Williams was found to have broken. However, based on a Washington Post article, we can extrapolate which rules Williams’ post likely broke: the ones against violating another user’s privacy, jeopardizing another user’s reputation, or, ultimately, harassing another user.

    Medium’s rules state that the platform does not allow “doxing, which includes not only private or obscure personal information but also the aggregation of publicly available information to target, shame [emphasis added], blackmail, harass, intimidate, threaten, or endanger.” Medium also does not allow harassment, which includes “bullying, threatening, or shaming someone [emphasis added], or posting things likely to encourage others to do so.” One might argue that when Williams posted the name of the man who called him a racial epithet, along with the video of their interaction in front of the police, he was doxing or harassing the neighbor by shaming him publicly. While there is an argument to be made that a momentary lapse in judgment should not be made permanent by the internet, there is also an argument that publicly shaming prejudice is actually effective. Indeed, one could argue that, under Medium’s own rules, the post could very easily have stayed up. Moreover, if one focuses on the newsworthiness of Williams’ post, multiple news outlets mentioned the neighbor by name before Medium took the post down, and those articles were still up at the time of this post’s publication.

    While, under Section 230, Medium is free to make these kinds of editorial decisions about third-party content, it is the arbitrary nature of these decisions that is troublesome. Medium’s rules are neither clear nor articulable, and, as highlighted here, its policy does not lay out clear or transparent processes for how content decisions are made. The appeals process is very limited, and the vagueness of the “good faith effort” provision gives both moderators and content creators little to go on when content decisions are made. While it is healthy for different platforms to have different standards and policies, knowing is half the battle: making sure users know what kind of content is being taken down, and why, should be standard practice for platforms.

    Airbnb

    As a point of comparison, another platform, the home-sharing service Airbnb, has made it its mission to change its content moderation policies to make the company’s services as safe and welcoming as they can be. Airbnb is an online marketplace for arranging or offering lodging (primarily homestays) and tourism experiences internationally. Airbnb has had its own content moderation issues, such as the tenor of reviews left by users, misleading descriptions of listings, and discrimination by hosts in its internal messaging system, not to mention basic issues of user safety. In 2019, the company released a three-year report that outlined the ways in which Airbnb had reduced, and continues to fight, discrimination on its platform in its various forms. Due to its diligence in fighting discrimination on its platform, Airbnb received commendations from Congressional Black Caucus members and civil rights organizations. Overall, Airbnb has been successful in instituting a number of practices on its platform to mitigate bias, even if it has a lot of work ahead of it to address other issues on the platform.

    Airbnb’s successful policy changes centered on how it handled its own content moderation. Its changes included an anti-discrimination policy that is more detailed and robust than its more general content policy standards. Airbnb’s policies also transparently lay out the process it uses to determine which posts are potentially discriminatory, and they set that process as the standard for its content moderation and takedown policy. Airbnb’s content moderation rules are in line with the Fair Housing Act’s anti-discrimination provisions. By focusing on the real issue with user content (the potential for users to feel unsafe or unfairly discriminated against), Airbnb was able to change the way users interact with the platform.

    Airbnb saw the potential for discrimination and bad actors on its platform and gave users a multi-page, clear-cut policy explaining what will be taken down and why the site monitors and moderates the content that is posted. The company has a few policy pages, but it did not take me long to find the answers I needed about why something might be removed from the website. Airbnb also has a clear appeals process and articulable standards whereby users can understand why their content has been taken down or flagged. These clear processes make it easier for both the platform and the user to understand why and how content is moderated. They let users know what is expected of them when using the platform and give them specific grounds for appeal if they think the platform made the wrong decision. Clear rules also make it easier for moderators to know what to take down and give moderators and platforms something to point to when their content or moderation practices are challenged.

    Underlying Issues

    Internet users may not often think about the content moderation policies of the platforms they use every day, and even less so about the ways in which those policies shape the lived experiences of other users. How content is moderated affects which stories are told and who has the agency to tell them. Furthermore, the current media environment consistently treats the voices of marginalized communities as afterthoughts. When marginalized communities do find their ideas and stories told, it is often through “mainstream” publications, reporters, or authors who rarely share their perspectives or lived experience. This is what makes the content policies enacted by platforms so critical. Ultimately, these platforms have the ability to amplify marginalized communities and their voices, and to allow them to feel heard, respected, and treated with dignity. Platforms like Medium find themselves at the forefront of that debate with articles like the one authored by Williams.

    This is not to say that Medium is intentionally or unintentionally biased in its content moderation; the platform hosts an array of articles and posts from people of nearly every gender, race, religious group, and background. However, it is interesting that a person of color’s account of his experience with prejudice is taken down while a proposal for how to tolerate white nationalism is kept up. As is true in a variety of contexts, the more opaque the rules, the more likely they are to have a disparate impact on people of color. And, as was highlighted in a report by Civic Media, the way that some communities of color communicate has been flagged by “neutral” algorithms that target “toxic” language.

    Platforms carry a lot of responsibility: they act as judge, jury, and executioner on speech, in ways that make people anxious and that someone will inevitably disagree with. We need to look at the way content is moderated, what best practices should be, and what accountability looks like. Williams’ article may have had some valid points, and Medium may have had some valid reasons for not allowing the article to remain on its platform, but it benefits the platform, the user, and the internet ecosystem as a whole to have a clear understanding of why those decisions were made. I, for one, would have loved to read it.

    I would like to thank Adonne Washington for her brilliance and help in developing this post.