From AI to Antitrust: Navigating Tensions Between Innovation and Market Power

As digital platforms increasingly become producers of generative AI content, new antitrust challenges emerge.

There’s been a lot of concern from journalists, artists, and other producers of content that their work, collected and used to train artificial intelligence models like Adobe Firefly or ChatGPT, is being remixed and churned out without recognition or due compensation from AI providers. These concerns are only growing given the increased integration of generative AI features into platforms that already control substantial portions of the digital ecosystem – such as search engines and content aggregation sites. The U.S. v. Google digital advertising trial is emblematic of this dynamic; if platforms are the engine driving the profitability of the internet, ad money is the fuel. Google’s dominance in the online advertising marketplace gives it remarkable power to set the terms of how users access information online, including creative content.

As digital platforms, particularly search engines, move toward becoming producers of generative AI content rather than simply indexers or hosts of third-party content, the unequal power dynamic between platforms and other digital market participants will likely worsen. The Federal Trade Commission is concerned about this development as it pertains to the intersection of copyright, AI-generated content, and consumer protection. Last October, the FTC submitted a controversial public comment to the U.S. Copyright Office outlining a series of risks it believes generative AI poses to consumers. Although well-meaning, the comment overlooked the perspectives of experts on copyright and fair use. Many of the risks it identifies (such as “passing off” AI-generated content as the work of a specific human artist) already have remedies in existing law. Others, such as remedies for the use of pirated data in training sets, are still being litigated. Likewise, members of Congress have taken notice, urging the FTC and the Department of Justice to investigate whether Big Tech’s generative AI systems might violate antitrust law by exploiting the platforms’ existing access to first-party content for training.

Overall, dialogue about anticompetitive behavior in the AI space is important for both the future of the internet and the law. But we need to be clear about what conduct is – and is not – protected under existing law. 

Actions that are permissible under copyright law can still be anticompetitive under antitrust law. Leveraging copyrights to engage in anticompetitive conduct has a long history in the entertainment industry: two of the largest music collective licensing organizations, ASCAP and BMI, are under consent decrees for doing exactly this. Today, we regularly see major rights holders use copyright protections to maintain or enhance their market power, or to limit the availability of their works to competitors. This strategy, which is entirely legal under copyright law, nevertheless can (and does) harm competition and innovation, stifling competitors and extending control over downstream markets. In fact, mergers or collaborations between companies that hold significant copyrights may raise serious antitrust concerns, especially given the consolidation trends we are seeing in AI markets. While copyright law provides certain protections and rights, it does not shield conduct from antitrust scrutiny if those rights are used in ways that harm competition. Regulatory bodies like the FTC may intervene if they determine that copyright practices negatively impact market dynamics.

Antitrust enforcement in AI markets must not unravel the tapestry of copyright standards that were carefully crafted to protect creators and foster innovation. Copyright protects creative expression; it does not protect facts. Facts, when separated from protected expression, are free for any and all to use without restriction. Everything from news reporting to media criticism, travel reporting to social media commentary, depends on this distinction. In the aforementioned letter to the FTC and DOJ, the members of Congress state that journalism and local news are under threat due to “some generative AI features [that] misappropriate third-party content and pass it off as novel content generated by the platform’s AI.” This remark, while well-intended, skates over critical nuances of copyright law. Facts about the world, like the news, cannot be “misappropriated” – they are not protected by copyright in the first place. Creative expression, on the other hand, can be “misappropriated”; we call that infringement. If copyright holders believe the creative expression in their work has been misappropriated, they can file a lawsuit for copyright infringement. Many creatives, authors, and publishers have already done so, and many of these cases are still working their way through the courts. Once resolved, these cases will offer clarity on how copyright applies to AI technologies.

Disruptive emerging technologies like generative AI require an analysis of their impact on technology markets and, by extension, on their customers. But we can’t chart a path to healthier market competition and consumer protection by blurring the line between facts and creative expression; doing so would erode copyright’s core protections for users.

The pathway to a fair, competitive, and open internet is multifaceted. As AI is poised to disrupt digital platforms, it’s imperative that we start wrestling with the long-term risks generative AI poses to competition and to all digital market participants, including creators. But while generative AI deserves careful examination to ensure fair competition among AI services, fundamental copyright questions – such as distinguishing lawful uses, like training on publicly available data, from infringement – should be resolved under existing copyright principles, which already provide a sufficient framework for that distinction. If the DOJ Antitrust Division or the FTC chooses to investigate further, it should seek out a wide range of stakeholder perspectives, including copyright and antitrust experts as well as civil society champions of open access to knowledge and information online.

Beyond this, it’s essential to employ a variety of methods that address the root cause of the problem – dominance through consolidated market power – rather than just its symptoms. One effective solution would be to establish a new regulatory agency dedicated to overseeing digital platforms. This agency would bring much-needed expertise to matters such as creating equitable advertising guidelines, ensuring that all players in the digital ad market have a fair chance to compete, and establishing rules to protect users. Additionally, it could more closely scrutinize the complex tactics that dominant platforms use to stifle competition and solidify their monopoly power. In conjunction with this, legislative measures should be implemented, including nondiscrimination rules that apply directly to search functions and prohibit self-preferencing. These legislative solutions would help level the playing field, promoting a more competitive and innovative digital landscape.