Digital Replicas, Part I: Defining the Harms

In the first installment of this three-part series on AI-generated digital replicas, we take a look at the potential harms that unauthorized uses pose to democracy, to industry, and to the public at large. Read Part II here.

On July 31, the Copyright Office published the first part of its long-awaited report on artificial intelligence and copyright. The report focuses on digital replicas, defining a digital replica as “a video, image, or audio recording that has been digitally created or manipulated to realistically but falsely depict an individual.” Breakthroughs in AI tools have led to a sudden surge in digital replicas of many different kinds, ranging from the dangerous (like convincing replicas of the President) and the despicable (like the image-based sexual abuse faced publicly by Taylor Swift), to the inspiring (like the accessibility and inclusion benefits of video translation that preserves voices) and the prosaic (like getting a group photo where everyone actually has their eyes open). While digital replicas can be made using any type of digital technology, and with or without an individual’s authorization, the flurry of attention is on unauthorized digital replicas created using generative artificial intelligence. Yet that is not the only avenue of risk or potential harm created by this technology.

In its report, the Copyright Office concludes that there is an immediate need for federal legislation to address the potential harms created by digital replicas. This isn’t breaking news; Congress also appears to see a need for action, with many pieces of proposed legislation already addressing different facets of digital replicas. At Public Knowledge, we have been urging action on digital replicas for over a year, including through analysis of the applicability of existing law, of how headline-grabbing moments reveal tensions in existing and proposed laws, and of the need for policy solutions that protect everyone from a range of harms – instead of just catering to celebrity and entertainment industry concerns.

This post is the first in a three-part series. Rather than diving straight into an analysis of proposed legislative solutions, Part I steps back and establishes a framework for considering the potential harms emerging from unauthorized AI-generated digital replicas. We will explore three key categories of harm – commercial, dignitary, and democratic – and highlight how these harms impact individuals, industries, and society at large. By examining these risks, we aim to provide a clear understanding of the challenges that arise from the misuse of digital replica technologies. In Part II, we will shift our focus to solutions, offering a set of guidelines and recommendations for legislative action that addresses these harms effectively, protecting the rights and dignity of all individuals while fostering responsible innovation. Finally, in Part III, we will examine some of the proposed bills, including the NO FAKES Act and the DEFIANCE Act, and measure them against our guidelines.

Three Categories of Potential Harm

There are three categories of potential harm that can arise from digital replicas: commercial harm, dignitary harm, and democratic harm. Commercial harms primarily arise from violations of a person’s right to control how their name, image, and likeness – often referred to as “NIL” – are used commercially, but they also include the threat of economic displacement by digital replicas. Dignitary harms are violations of a person’s rights to privacy and respect, and to be free from harassment and abuse. Finally, democratic harms are those that damage our system of government and shared information environment, like disinformation.

Commercial Harms

Digital replicas have the potential to disrupt existing industries, which brings both new opportunities and new threats. Therefore, it is unsurprising that one of the main drivers of the conversation around digital replicas is how they affect individuals who derive economic value from their likeness. Entertainers like actors and musicians – be they working professionals or big-time celebrities – are facing a host of challenges presented by the explosion in digital replication technology.

Entertainers are particularly concerned about labor displacement – losing out on paying jobs because a movie studio, record label, or other company decided to use a digital replica of an actor or artist instead of hiring the human creative worker. The SAG-AFTRA strike was driven in no small part by the concerns of actors at every level that they could be displaced by AI-generated replicas of themselves. This concern extends further, to the potential that authorized digital replicas, created by media companies from existing professionals, simply drive down the number of opportunities for everyone. And it’s not just screen actors; voice actors, musicians, and many other working professionals rely on their appearance, their voice, or some other aspect of their likeness to put food on the table. Indeed, it may be entertainers outside of industries with strong labor protections who are hit the hardest. For example, a massive amount of simple voiceover work could be completed with a small library of fully authorized but inexpensive digital replicas, driving down both the need to hire voice actors and the value of their work.

There is also the threat of unauthorized exploitation of a likeness for commercial gain. The wide availability of digital replica technology gives unscrupulous people and companies the opportunity to use digital replicas for endorsements, advertising, or in their own products without permission. These uses don’t necessarily cost the person depicted a job, but they still represent economic harm through the unfairness and exploitation that can be part of them. Relatedly, unauthorized uses can also impact a person’s livelihood by tarnishing their reputation or personal brand. Congress has already heard testimony from high-profile individuals about finding their faces and voices used to advertise products that harm their reputations. Even simple dilution of an individual’s brand, or consumer exhaustion at seeing a flood of unauthorized and untrustworthy spokes-replicas, could spell disaster for individuals who previously relied on (or hoped for) sponsorships.

Meanwhile, other professionals and companies want a legal regime that makes it easier to use digital replicas commercially. Technology and media companies in particular favor an easy legal path to securing licenses for likenesses and using digital replicas in their products. The risk of such a permissive environment is that individuals could be alienated from, and exploited through, the commodification of their own likenesses – it is easy to imagine people being tricked, pressured, or unfairly compensated for the right to use their NIL, winding up “stuck” with a bad deal that lets someone else exploit their digital replica however they wish. There have already been examples of predatory and exploitative contracts targeting artists, athletes, and others.

Dignitary Harms

The legal system has long recognized harms to personal dignity as worthy of protection. People have a right to protect their reputation, privacy, and emotional well-being, and courts allow actions for defamation, false light invasion of privacy, and intentional infliction of emotional distress. People can seek redress in civil court for these wrongs not just because of economic or financial harm, but because of the inherent indignity of the wrongs themselves.

Digital replicas present a new vector for old, insidious problems. Digital replicas can be used to put reprehensible words into someone’s mouth, to create non-consensual intimate imagery (NCII), and to abuse, harass, and defame people. These harms are not speculative; they are already happening. People targeted include wealthy and powerful celebrities – like Taylor Swift – but also some of the most vulnerable among us – like kids and people with marginalized identities. Overwhelmingly, the victims are women and girls. 

The simplest version of this problem is having one’s reputation harmed by a digital replica of one’s voice or appearance that misrepresents one’s actions or beliefs. While this can have a commercial aspect, there is undoubtedly a dignitary interest involved as well. For example, there has been a surge in sketchy AI-generated deepfake advertisements for products like erectile dysfunction drugs and bogus health supplements that exploit intimate stories shared online, but there are also seemingly state-sponsored propaganda efforts that turn individuals into mouthpieces for authoritarian regimes. These harms go beyond defending one’s ability to make a buck; they go to the right to control one’s identity and reputation in public.

These harms can be even more vicious. While online harassment, bullying, and even image-based abuse are not new problems, AI-powered digital replicas make it easier and faster for bad actors to cause harm. In particular, the impact of non-consensual intimate imagery, including deepfake NCII, can be devastating to an individual’s mental health, dignity, and sense of safety. Although high-profile individuals, such as celebrities, are the more visible victims, the reality is that this harm can – and does – affect ordinary people too.

Democratic Harms

While commercial and dignitary harms focus on the individual, there are collective harms as well. Our democracy relies on a well-informed electorate and a trusted information environment, and both are already in significant decline. Add into that mix new technology that undermines our ability to trust what we see or hear, and real harms are at hand. Deceptive digital replicas – or simply the fear or possibility of them – are corrosive to the trust needed for everyone to share in a common understanding of the facts of the world. In brief, democratic harms arise from the potential for digital replicas to create misinformation and intentional disinformation, to increase the corrosive cynicism that degrades trust in our information ecosystem (and by extension, our democratic governance systems), and to provoke a backlash of censorship or false allegations of fraud.

Mis- and disinformation are among the headline concerns about generative AI. And, like some of the other harms discussed, this concern is not merely speculative. The 2024 election has already seen instances of AI-generated disinformation, such as when a synthetic version of President Biden’s voice was used to discourage New Hampshire voters from participating in the state’s primary. While the Federal Communications Commission was able to leap into action in that instance, many digital communication channels do not have the benefit of regulatory oversight.

Instances like the one in New Hampshire are deeply troubling, but ultimately the greatest damage may be an increased distrust of the information ecosystem overall. In raising the alarm about the power of AI to “supercharge” disinformation, the media may ironically be reporting themselves out of a job: an increasingly cynical populace is being primed to simply disregard new information and retreat further into its preconceived notions and narratives. Long before the rise of AI-generated digital replication, Putin’s Russia intentionally cultivated an atmosphere of cynicism, misbelief, and disillusionment as a mechanism of control. It is now easy to find people heavily scrutinizing photos and videos on social media, commenting that they suspect the images are AI-generated or manipulated. And politicians around the world are already starting to falsely claim that damaging or unflattering coverage of them is AI-generated.

Finally, the third set of democratic harms is intertwined with the two above. In reaction to the spread of disinformation, and in an effort to rebuild trust in the information environment, there is a real risk of a backlash of censorship and over-moderation online. If, out of fear of disinformation, platforms start aggressively removing or filtering content, they will damage free expression and the unprecedented free flow of information the internet has brought us. That, in turn, will damage our democracy, and the burden will likely fall disproportionately on marginalized communities.

Conclusion

Like any new technology, AI-enabled digital replicas carry both promise and peril. AI can unlock new avenues of creativity and make communication more seamless, accessible, and inclusive. But the same advances bring new threats that span industries, our personal lives, and even the fabric of democratic society. From displacing creative professionals, to violating individuals’ privacy and dignity, to spreading disinformation, the potential risks are varied and significant. In Part II of this series, we will explore solutions for mitigating these harms and offer guidelines for legislative action to address these challenges.