Should Algorithms Be Regulated?
Part 2: Cataloging the Harms of Algorithmic Decision Making

This is the second in a series of blog posts examining the public policy implications of algorithmic decision-making. It explores algorithmic harms to safety and well-being, economic justice, and democratic participation.

You can read Part 1 of this series, “Should Algorithms Be Regulated? An Approach to a Policy Framework from Public Knowledge,” here.

This is the second in a series of blog posts from Public Knowledge examining the public policy implications of algorithmic decision-making. We are responding in part to a series of legislative proposals over the past year or so targeted at the advertising-based business model that motivates platforms to use algorithms to distribute content based primarily on a profit motive rather than the public interest (e.g., the Algorithmic Justice and Online Platform Transparency Act, the Banning Surveillance Advertising Act, the Algorithmic Accountability Act, and the Social Media NUDGE Act). In order to create appropriate policy solutions, we have to understand the nature of the harms that can arise from algorithmic decision-making about content, including ad targeting and delivery, content moderation, and content ranking and recommendation systems. This blog post catalogs what we have learned about algorithmic harms from these systems in three broad categories: (1) harms to safety and well-being; (2) harms to economic justice; and (3) harms to democratic participation. One important theme that unites the harms across all three categories is that their impact is too often concentrated on historically disadvantaged communities.

We acknowledge at the outset that there are other, and in some cases more virulent, harms associated with algorithmic decision-making that are not focused on content distribution, including harms that arise from facial recognition, deepfake technologies, law enforcement access to social media, and the technologies used in criminal justice, employment, rental housing and real estate, credit and lending, and health care, to name a few. We also don’t mean to suggest that the harms we catalog arise exclusively from algorithmic decision-making. In fact, we hope algorithmic decision-making could one day mitigate the cultural biases and disparate outcomes that arise from human decision-making (predicated on adequate reform of systemic inequities).

Additionally, our catalog of harms associated with platforms’ content distribution is by no means exhaustive, nor are its categories mutually exclusive. There are complicated intersections and relationships among these harms, and the policy solutions we discuss in later posts will need to reflect those intertwined relationships. Lastly, any proposal we support will be consistent with the First Amendment, which gives full protection to a wide range of “harmful” speech. At Public Knowledge, we are not proposing to ban any kind of legally protected speech; however, we encourage platforms to create policies and digital tools that minimize harms. We also recognize that social media companies, advocates, and government stakeholders disagree about what constitutes “harmful” speech, based on differing personal and community values and perspectives.

Harms to Safety and Well-Being

We use a broad definition of safety and well-being that encompasses personal dignity, privacy, and autonomy as well as harms to personal and public health. Algorithmic decision-making can foster these harms in multiple ways. For example, in ad targeting and delivery, algorithms use behavioral data compiled across web pages and applications to create proxy identities for targeting content – identities that may or may not be truly representative, and which may result in discrimination, manipulation, or exploitation. Algorithms used to automate content moderation often fail to catch – and the algorithms that distribute content can even amplify – harassment and hate speech, including offensive name-calling (cyberbullying), verbal abuse, stalking, purposeful embarrassment, physical threats, doxing, and non-consensual distribution of sexual images. Many of these categories of content require context and a subjective understanding of language and culture to determine the meaning of a word, image, or video, whereas moderation algorithms generally rely on matching specific terms (for words and phrases) or hashes (for images and videos). Additionally, companies often change the parameters for these categories in response to real-world events, which means the algorithms may be working with limited or stale data sets. Bad actors seeking to cause these kinds of harms can optimize their content to exploit the algorithms even without knowing the underlying code. The opacity, potential for flawed data and feedback loops, lack of regulation, and sheer scale of algorithmic moderation compound its impact relative to human moderation.
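To make concrete why term- and hash-matching struggles with context, here is a minimal, hypothetical sketch of that style of automated moderation. The term list, hash set, and function names are our own placeholders, not any platform’s actual system; real moderation pipelines are far more elaborate, but the context-blindness is the same in kind.

```python
import hashlib

# Hypothetical, simplified moderation rules: exact term and hash matching.
BLOCKED_TERMS = {"exampleslur"}        # placeholder term, not a real list
BLOCKED_IMAGE_HASHES = {"0" * 64}      # placeholder hash values

def moderate_text(post: str) -> bool:
    """Flag a post only if it contains an exact blocked term."""
    words = post.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

def moderate_image(image_bytes: bytes) -> bool:
    """Flag an image only if its exact hash matches a known blocked hash."""
    return hashlib.sha256(image_bytes).hexdigest() in BLOCKED_IMAGE_HASHES

# A post quoting or reporting on a term gets flagged, while the same slur with
# a trivial misspelling, or a slightly re-encoded copy of a blocked image,
# slips through: the matcher has no notion of context, intent, or culture.
print(moderate_text("reporting on the use of exampleslur"))  # True  (over-moderation)
print(moderate_text("use of examp1eslur"))                   # False (under-moderation)
```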

Again, these harms are disproportionately likely to impact historically disadvantaged groups. Over half of Black Americans and almost half of Latine Americans say they have experienced racially motivated online harassment. Similarly, half of lesbian, gay, and bisexual respondents say they have been harassed online because of their gender or sexual orientation. Yet platforms choose to ignore or suppress research showing their content moderation systems may be discriminatory, and allegations of racist, sexist, and queerphobic intimidation from other users go largely uninvestigated and unresolved.

These harms to safety and well-being are far from theoretical or abstract. In 2020, COVID-19 misinformation provided a vivid proof point of how algorithmic information distribution can lead to real-world harms to personal and public health. In the early weeks of the pandemic, the World Health Organization warned, “We’re not just fighting an epidemic; we’re fighting an infodemic. Fake news spreads faster and more easily than the virus, and it is just as dangerous.” The evolving nature of medical science, with new research leading to new conclusions, made the definition of misinformation a moving target – a weakness for algorithms that review and moderate content. Human moderators were sent home, and platforms themselves warned users that greater reliance on algorithmic decision-making would increase the risk of over- or under-moderating content. Platforms also had unequal content moderation support and capabilities for non-English content, which left certain communities even more vulnerable. Numerous online ads claiming to sell verified prevention tools and cures for the coronavirus circulated widely; research indicates that communities of color and other marginalized groups may be especially susceptible to such campaigns.

COVID-19 quickly became a vivid and lasting example of how misinformation swirling on digital platforms – which now extends to treatments, public health mandates, and vaccines – can cause significant real-world harms. Sometimes the harm is to personal or public health (for example, when people act on misinformation about the epidemiology of a disease or the efficacy of a treatment). But disinformation can also create broader societal harms: stoking panic; threatening physical safety; limiting the effectiveness of official and institutional efforts to address public health issues; sowing mistrust, division, and polarization; and fostering racism and discrimination.

That’s because, as Public Knowledge pointed out in a blog post early in the pandemic, this kind of disinformation is often spread by many of the same peddlers of political disinformation, for many of the same reasons: to divide communities, sow mistrust, undermine faith in institutions, and exert political control. Some of the actors pushing disinformation about the vaccine target it at the very groups suffering most from the pandemic, and even pit demographic groups against each other. For example, social media companies have failed to adequately address both politically motivated anti-vaccine propaganda injected into Black communities and viral intra-community conspiracy campaigns touting Black hereditary immunity. These two variants of COVID-19 misinformation have converged to reinforce medical mistrust in the Black community – mistrust informed by centuries of medical experimentation and exploitation – and to accelerate disproportionate transmission, sickness, and death from COVID-19 among Black Americans.

Harms to Economic Justice

Economic justice refers to whether economic opportunities, resources, and benefits are distributed fairly and equitably. As Rebecca Kelly Slaughter, a Commissioner of the Federal Trade Commission, set out in an instructive law review article, algorithms can facilitate economic injustice in three ways: (1) by facilitating proxy discrimination (the use of facially neutral proxies to target people based on protected characteristics); (2) by enabling and motivating surveillance capitalism; and (3) by inhibiting competition in markets and enhancing dominant players’ market power. These algorithms can be particularly harmful when applied to areas with high economic stakes, like advertising on social media platforms for employment opportunities, credit products, health care, real estate, and housing. They can exacerbate existing inequalities and injustices and embed some of our society’s most persistent and pernicious stereotypes, threatening opportunities for economic advancement. Each harm can result from a variety of flaws in the design, testing, application, and/or outcome monitoring of platforms’ algorithmic decision-making.

By allowing algorithmic creation or targeting of content based on protected characteristics, digital platforms play an outsized role in continuing to exclude communities of color and people with disabilities from economic opportunities – essentially automating a long history of economic exclusion. Facebook and Google – two dominant digital platforms – have admitted their role in automating inequity in economic opportunity by allowing ad purchasers to filter out “undesirable” audiences using conspicuous labels like age, gender, ZIP code, and multicultural affinity, or other options that relate to protected characteristics. (Subsequent investigations have shown that advertisers may still be able to do so by proxy.) Targeted advertising to consumers with characteristics relevant to an advertiser’s offering has a long history in marketing, but algorithmically targeted, individually tailored ad delivery multiplies the potential for exclusion or exploitation.
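To illustrate what proxy discrimination in ad targeting can look like in practice, here is a deliberately simplified, hypothetical sketch. The profile fields, ZIP codes, and eligibility function are invented for illustration; they are not drawn from any platform’s actual targeting interface.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    zip_code: str                       # facially neutral attribute
    interests: list = field(default_factory=list)  # inferred from behavioral data

# A facially neutral audience filter: exclude certain ZIP codes from seeing a
# housing ad. If those ZIP codes track historically segregated neighborhoods,
# the filter excludes people by race without ever naming race.
EXCLUDED_ZIP_CODES = {"00001", "00002"}  # placeholder values

def eligible_for_housing_ad(user: UserProfile) -> bool:
    return user.zip_code not in EXCLUDED_ZIP_CODES

audience = [
    UserProfile("user_a", "00001", ["home decor"]),
    UserProfile("user_b", "00003", ["home decor"]),
]
print([u.user_id for u in audience if eligible_for_housing_ad(u)])  # ['user_b']
```

Because the criterion is neutral on its face, the exclusion becomes visible only in the pattern of who actually sees the ad.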

The advertising-based business model of digital platforms requires and enables surveillance capitalism: the use of machine learning algorithms to collect and process immense pools of consumer data, often in real time, in an effort to capture and monetize as much attention from as many people as possible. Again, communities of color tend to be disproportionately surveilled, often quite directly. Dominant digital platforms – along with powerful government agencies and other entities that broker in personal data – surveil millions of people in Black, Latine, and immigrant communities. The FTC has recognized that unfair and deceptive practices and fraud have a disproportionately negative impact on communities of color.

Algorithms can also inhibit competition and enhance the power of dominant players in markets. They can be used to “personalize” prices, amplify or suppress content, accumulate and concentrate vast amounts of data inside walled gardens, and manipulate or self-preference search results. Social media companies don’t always show you what you want to see; they show you what they want you to see, for their profit. While these are harms to competition and choice in markets generally, they again have particular consequences for marginalized communities. For example, Facebook, Instagram, TikTok, and Google have all acknowledged that their products may help suppress, exploit, or underpay Black creators. Algorithms that claim to tap the power of the community to produce recommendations through collaborative filtering may instead reproduce cultural bias, decrease diversity, and make it harder for the work of creators of color to be discovered.
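As a rough illustration of how collaborative filtering can entrench existing popularity (and, with it, existing bias), here is a hypothetical sketch. The interaction matrix, creator names, and similarity formula are invented for this example and are not any platform’s actual recommender.

```python
import numpy as np

creators = ["creator_a", "creator_b", "creator_c"]

# Rows = users, columns = creators; 1 means the user engaged with that creator.
# creator_a already holds most of the engagement.
interactions = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
])

def recommend(user_row: np.ndarray) -> str:
    """Recommend the unseen creator most engaged-with by similar users."""
    norms = np.linalg.norm(interactions, axis=1) * np.linalg.norm(user_row)
    sims = (interactions @ user_row) / np.where(norms == 0, 1, norms)
    scores = sims @ interactions          # weight creators by similar users
    scores[user_row == 1] = -np.inf       # skip creators already seen
    return creators[int(np.argmax(scores))]

new_user = np.array([0, 0, 1])            # has only engaged with creator_c
print(recommend(new_user))                # 'creator_a': the incumbent wins again
```

Because every recommendation generates new engagement data, the next round of ranking sees the incumbent as even more popular – a feedback loop that makes lesser-known creators, including many creators of color, harder to discover.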

Harms to Democratic Participation

The January 6, 2021, assault on the U.S. Capitol illustrates how algorithms can amplify dangers to democratic participation, safety, and well-being. It followed at least 650,000 posts – on Facebook alone – attacking the integrity of the 2020 election, many of them calling for executions and other violence. “The Big Lie” that the 2020 election was stolen from former President Donald Trump has also led 19 states to pass new laws curtailing access to the voting booth. As we described in a blog post just after the election, these anti-democratic harms came largely from domestically produced and amplified misinformation spread by partisan political actors, including the then-President of the United States. But platforms and their algorithms have also been exploited by foreign actors to thwart diverse democratic participation, especially in Black and Latine communities, in every election since 2016.

Voting isn’t the only form of political participation that can be harmed by algorithmic decision-making. Communities of color have a long history of being over-surveilled, and social media has worsened the problem. Both public and private entities have systematically subjected communities of color to undignified and discriminatory surveillance online and in real life, suppressing their constitutional right to peaceful protest. Evidence suggests police have exploited social media sites’ API vulnerabilities and automated suggestions about connections and community-based groups to associate individuals with disfavored political movements or generic “gang activity” in order to justify investigating, arresting, or imprisoning them. Civil rights and civil liberties groups have flagged the unreasonable federal surveillance of pro-community activists and the heavy-handed prosecutions of Black activists in the wake of the murders of Ahmaud Arbery, George Floyd, and Breonna Taylor as tactics intended to “chill” lawful Black political expression by criminalizing it.

Algorithms can also distort democratic participation by increasing polarization and encouraging political extremism. The concept of “filter bubbles” describes how algorithms control what users see, promoting content and ideas the user has already engaged with. This hardens people’s existing views and exacerbates political polarization; taken to an extreme, it means users live in their own world of “facts.” Algorithms may also recommend accounts and groups that lead users down ideological rabbit holes toward ever-more-extreme content. Facebook has acknowledged that its own research shows its algorithms aggravate polarization and tribalistic behavior. All of these dynamics contribute to mistrust in democratic institutions, in our political processes, and in each other.
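A hypothetical, stripped-down ranking function shows how this feedback loop arises. The topics, post structure, and scoring rule here are our own illustration, not any platform’s actual feed algorithm.

```python
from collections import Counter

def rank_feed(candidate_posts, user_history):
    """Order candidate posts by how often the user engaged with each topic."""
    topic_affinity = Counter(post["topic"] for post in user_history)
    return sorted(candidate_posts,
                  key=lambda post: topic_affinity[post["topic"]],
                  reverse=True)

history = [{"topic": "election_fraud_claims"}] * 5 + [{"topic": "gardening"}]
candidates = [
    {"id": 1, "topic": "gardening"},
    {"id": 2, "topic": "election_fraud_claims"},
    {"id": 3, "topic": "local_news"},
]

print([post["id"] for post in rank_feed(candidates, history)])  # [2, 1, 3]
# Each click on the top-ranked post flows back into user_history, so the next
# ranking skews even further toward the same topic: the filter bubble tightens.
```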

Cataloging Harms to Frame Solutions

Our intent in cataloging these harms is to help convey why policymakers feel increasing determination and urgency to get underneath the platforms’ advertising-driven business models and use of algorithms, and to change the incentives that drive the policies, practices, corporate cultures, and coding priorities at the heart of those models. To ensure those changes hit their mark and address these and other harms, we will need to carefully assess the challenges inherent in government regulation of algorithms, which, if not done correctly, could inhibit constitutionally protected speech. We will address some of these challenges in our next post.