Should Algorithms Be Regulated? Part 3: Evaluating Alternative Frameworks for Regulating Algorithms

This is the third in a series of blog posts from Public Knowledge examining the public policy implications of algorithmic decision-making. The first post clarified what algorithms are and aren’t and identified some basic principles for attempting to regulate algorithmic decision-making, especially as it relates to content distribution. The second post cataloged the harms that can arise from algorithmic decision-making, including (1) harms to safety and well-being; (2) harms to economic justice; and (3) harms to democratic participation. (We’ve also written in other forums about the harmful consequences of unregulated, unproven AI.)

In this post and the accompanying policy framework, we seek to assess various theories of change and related policy frameworks for creating accountability for algorithmic decision-making. We describe each approach’s drawbacks and benefits and put forward Public Knowledge’s perspective. 

Spoiler alert: There is no single or simple solution for the complex questions our exploration has raised. As you’ll see in our framework, there are several potential approaches for trying to mitigate, through direct regulation of algorithms, the myriad harms that algorithmic decision-making about content can create. But most of them face substantial hurdles, particularly those that entail or even approach the regulation of user-generated content – most of which is protected speech. These hurdles include constitutional challenges, unfavorable patterns of Supreme Court jurisprudence, and narrow or ambiguous definitions that would compound the challenges of content moderation for both platforms and users. As a Federal Trade Commission report on combating online harms noted, “governments, platforms, and others must exercise great caution” and focus attention on a broad array of considerations before turning to regulation to mitigate online harms.

The persistent challenge of creating accountability for the harms associated with algorithmic decision-making is not for lack of effort on the part of advocates or policymakers. In the last year or so, accelerated by whistleblower revelations, many bills have been introduced that target the advertising-based business model that motivates platforms to use algorithms to distribute content based primarily on a profit motive rather than the public interest. Academics, legal scholars, civil society groups, and others have assessed these proposals exhaustively and brought forward some of their own. We feel our greatest opportunity to add value to this discourse is not to duplicate that effort but to summarize it and present it in a way that can serve as an aid to policymakers.

We evaluated eight alternative policy frameworks that have been proposed – usually in the form of actual legislation – for regulating algorithmic decision-making. Each one is predicated on a distinct theory of change. These policy frameworks include:

  1. Increasing platform accountability for certain forms of content. 
  2. Increasing platform accountability for amplifying any form of content via complex algorithms.
  3. Introducing platform requirements for transparency, choice, and due process in algorithmic content moderation.
  4. Introducing privacy regimes that disincentivize or outright ban certain forms of data collection and use. 
  5. Extending product safety regimes into algorithmic product design.
  6. Implementing a dedicated digital regulator with broad jurisdiction, strong enforcement, and rulemaking authority.
  7. Increasing platform accountability for the platforms’ ad-based business model. 
  8. Increasing the power of the user by expanding antitrust and competition policy.

After evaluating what proponents and opponents of each framework have to say, as well as the bills that embody each framework, we conclude that the optimal solution for mitigating the harms of algorithmic decision-making about content distribution entails a combination of strategies:

  • Expanding antitrust and competition policy and strengthening enforcement to increase consumer choice and reduce the dominant platforms’ potential for harm.
  • Requiring transparency into platforms’ algorithmic design and outcomes as a component of better regulation and informed consumer choice, and creating accountability for platforms’ enforcement of their own policies. 
  • Passing national privacy legislation that prohibits the worst abuses of the platforms’ data-based business models and makes some forms of harm difficult or impossible to carry out.
  • Creating or designating a dedicated digital regulator with the specialized expertise and agility to keep pace with innovation and to create solutions that integrate research, risk assessment, technical standards, auditing and transparency, rulemaking, and enforcement.

To be clear, we believe it is the role of Congress to pass legislation addressing the aspects of the platforms’ business model that have the potential for harm. It is within Congress’s authority, and very much in its tradition, to rein in the anticompetitive or harmful practices of specific industries. This set of recommendations reflects our continued belief that models calling for direct regulation of algorithmic amplification – whether of the speech itself or of the algorithms that distribute it – simply wouldn’t work or would lead to bad results. These models of direct regulation are also incompatible with principles of free expression, and they run counter to a long history of research showing that over-aggressive content moderation – which most of these bills would induce – has the greatest negative impact on traditionally marginalized communities. We’ve written about these complex issues before, including here (a post about the benefits of pro-competition policies like interoperability and non-discrimination); here (a post about the importance of due process in content moderation); and here (a post about Section 230 and how to protect free expression online).

Our recommendations also reflect our belief – and our experience, which we elaborate on in this paper – that antitrust enforcers and a regulatory agency with specialized jurisdiction, each working independently in pursuit of its own defined mission, can produce substantial benefits for society.

A few notes on our method of evaluation: With a few exceptions, we disregarded as a factor the vigor with which digital platforms may lobby or litigate against the legislation; we can assume it’s high in all cases. Conversely, we disregarded as a factor the vigor with which digital platforms may lobby in favor of a given framework, since we believe these efforts to be somewhat cynical and aimed at deterrence (their real aim, in our view, is maintaining the unregulated status quo or introducing regulatory hurdles that would be particularly challenging for upstart competitors to manage). We also disregarded the degree of difficulty each framework will face in gaining political adoption; we can assume it, too, is high in all cases, and that these factors are frequently shifting. We considered only lightly the degree of fit between frameworks proposed in the United States and those proposed in the European Union or elsewhere, though this may become a greater factor as the Digital Markets Act and Digital Services Act move toward enactment. Finally, we excluded from our analysis use cases in which discriminatory algorithmic decision-making is already illegal, such as decisions about housing, employment, or credit based on protected classifications.

Of course, there are multiple pathways to achieve the intended outcome, and we have good things to say about several bills that represent steps toward the combination of policy solutions we recommend. The important thing is that it’s time to take strategic, selective steps forward to address what we now know about the harms of algorithmic decision-making – and about the knowledge platforms themselves have of how those harms are caused.