In this second installment of a three-part series on digital replicas, we focus on solutions that effectively address the harms posed by unauthorized uses of this emerging technology. You can read Part I here.
In Part I, we outlined the key categories of harm posed by digital replicas: commercial, dignitary, and democratic. There is unlikely to be a single solution to address all of these issues. While digital replicas are an easy category to bundle together, it is plain to see that creating a new song using a digital replica of an artist’s voice, harassing someone using synthetic non-consensual intimate imagery (NCII), and putting words into the mouth of a sitting president each present vastly different policy considerations.
Effectively protecting our livelihoods, dignity, and democracy will require carefully balanced safeguards. While striking that balance is a challenge, there are good guidelines for effective action. In this second part, we explore how policymakers, legal systems, and society at large can address these harms, ensuring that individuals are protected from the new risks created by digital replicas.
Addressing Commercial Harms
Both the Copyright Office report and Public Knowledge’s writing on digital replicas explore the existing legal remedies for addressing commercial and economic harms. It needs to be repeated often: AI is not an exception to the law. People have long defended their rights of publicity and trademarks, and have licensed their image for commercial use. Computer-generated holographic concert performances from Tupac, Michael Jackson, Elvis, and many others were all possible under our existing system of contracts and name, image, and likeness (NIL) rights. New technology does not mean all of these tools go out the window.
Yet, the existence of remedies and legal tools does not mean we should gloss over commercial concerns and leave them entirely to existing law. The challenges posed by AI-powered digital replication are worth addressing both specifically and at a high level, which is why Public Knowledge has consistently spoken in support of creative workers in their strike negotiations. At the same time, we must recognize that working entertainment professionals, celebrities, and public figures have different economic needs, interests, and resources than most other people. As a result, policy solutions aimed at tackling commercial and economic harms need to be targeted and balanced carefully so that everyone is protected, not just the rich, famous, and well-connected.
Guidelines for Addressing Commercial Harms
- Use federal law to harmonize existing right of publicity and NIL protections nationwide. The increased ease of producing digital replicas means that these once-obscure causes of action are going to become increasingly common, and clear, harmonious rules across jurisdictions would make legal risks and remedies easier to navigate for everyone. Right now, every state has different rules, creating a complex legal tangle as digital replicas become more common. Congress has the power to pass a federal law that preempts these conflicting statutes and creates a simplified legal regime.
- Do not create new intellectual property rights; instead, rely on updating the existing laws and rules already in use. There are already tort, trademark, and contract law solutions for every existing commercial harm. Because these harms are commercial in nature, there are always financial damages at stake, and there are often sophisticated professional parties involved, which makes these issues ripe for resolution in civil courts. Creating a new IP right opens the door to questions about scope, duration, applicability, and intersection with existing law, not to mention constitutional concerns and the impact on free expression inherent in restricting speech through a new intellectual property right. The simpler, safer path is to model federal law on existing state laws that do not use property rights, or to update existing federal protections, such as the Lanham Act, which governs trademarks.
- Ensure there are protections to prevent Big Tech and media companies from exploiting people by getting them to sign away their NIL rights. When it comes to using someone’s likeness for commercial gain, there should be strong protections for individuals to prevent exploitation. Some professionals are already protected by unions or represented by lawyers, but we need guardrails and regulations to ensure that those without such representation are either afforded it or otherwise protected. Requiring individuals to be represented by counsel when granting any license, enforcing strict term and usage limitations, and ensuring that individuals always retain the right to speak in their own voice are all essential protections that should be built into any new law governing NIL licensing.
Addressing Dignitary Harms
In Part I, we discussed how dignitary harms (damage to one’s reputation, privacy, and well-being) have the greatest impact on individuals and are the most likely to affect ordinary people. As a result, these harms ought to be centered in any legislative strategy for addressing digital replicas. Our solutions must also consider those most impacted by these harms: women, girls, youth, and people with marginalized identities. That means we need clear, accessible, and fair legal mechanisms to enable people to protect their reputation, dignity, and privacy. Our policies must enfranchise and empower everyone, including those without the time, resources, power, or expertise to navigate the legal system.
The unfortunate reality is that AI-enabled deepfakes have created new channels for abuse, harassment, and defamation online. AI did not create these problems, but it has accelerated the urgency with which they must be addressed. Yet, it is also important not to mistake the catalyst for the cause: When fighting those determined to abuse, harass, denigrate, and deceive, our solutions must primarily target their harmful actions. When making policies about the technology itself, understanding where to address potential harms in the development cycle of AI technologies is essential.
Guidelines for Addressing Dignitary Harms
- Prioritize protections that extend to everyone. Instances of harassment that affect celebrities – or particularly vulnerable groups like children – capture a lot of attention, but we need solutions that are broad, equitable, inclusive, and designed to protect everyone. Most people won’t ever commercialize their likeness, but everyone deserves to have their dignity and privacy protected.
- Ensure that synthetic representations cannot serve as loopholes in existing legal protections against harmful depictions of individuals. Some existing laws and regulations around defamation, invasion of privacy, and the like may have definitions that do not include or consider the potential for synthetic representations of a person’s likeness. We can and should close these loopholes and clarify the law wherever gaps or ambiguities exist. The challenges of NCII and online harassment are not new, and people need to be protected whether or not they are victimized with AI-enabled content.
- Empower everyone to get non-consensual intimate imagery of themselves removed. The most effective remedy for harmful acts like the creation of NCII is to limit its spread, reach, and impact. Notice-and-takedown systems or other related mechanisms, if properly designed, present an effective and accessible remedy. Platforms serve as powerful gatekeepers and should bear responsibility for detecting and removing harassing, abusive, and harmful content in accordance with their own policies.
- Promote more equitable access to civil legal recourse, building on existing legal protections. Access to justice in the courts, or even to certain legal claims, is often gated by socio-economic status. It costs time and money to fight for justice in civil court, and some causes of action (reasons to sue) may be out of reach unless a person’s reputation has significant commercial value. Prevailing-party provisions that allow people to recover attorney’s fees if they win a case, updates to existing laws to give more people a chance to get into court, and other changes could all improve the average person’s ability to use the judicial system to address dignitary harms.
- Balance the need for strong protections with the importance of preserving free expression. Laws and regulations should target conduct and be as narrowly tailored as possible. Overly restrictive policies designed to prevent one set of harms could inadvertently cause a whole new set of problems, like Big Tech-driven systems of private censorship or further damage to our information environment. For example, requirements for watermarking synthetic content could be used to build upload filters that chill and limit speech online, and takedown systems can be weaponized through false claims.
- Regulations should be aimed at deployers and users, not at AI developers. Trying to prevent misuse by imposing restrictions on AI systems themselves is a losing battle, even where it is constitutionally permissible. AI models are inherently general-purpose systems, and the benefits of access to open models for testing and transparency outweigh the marginal risks from misuse. Similarly, we want to hold Big Tech platforms accountable to their own policies and to make use of their considerable resources to address these problems, but we should also seek policy solutions that place responsibility for harmful actions with the bad actors who cause the harm.
Addressing Democratic Harms
The rise of digital replicas powered by AI presents a critical challenge to our democracy by supercharging the spread of disinformation and undermining public trust in the media and political institutions. AI-generated content, such as synthetic voices and videos, can be used to manipulate political messaging, as we’ve already seen in the 2024 election cycle. False representations of political figures or events distort the democratic process, misleading voters and creating an environment where even truthful information is viewed with skepticism. This growing distrust of the information ecosystem, fueled by AI-driven disinformation, poses a grave threat to our ability to have informed, meaningful public discourse – a cornerstone of democratic governance.
To mitigate these democratic harms, we need solutions that focus on promoting transparency while safeguarding free expression. As highlighted in our previous work on digital replicas, over-moderation and censorship are real risks that could emerge in the wake of an overreaction to disinformation. Rather than blanket restrictions, policymakers should focus on clear, narrowly defined rules for political content, particularly around disclosures in advertising and protections for electoral integrity. Technological solutions like content authentication tools, which track the provenance of media, offer a more promising approach than broad measures like watermarking, which can easily be abused. Furthermore, a dedicated digital platform regulator – which could exercise powers similar to the FCC in overseeing political advertising – could help ensure that online platforms are held accountable for the dissemination of disinformation within established constitutional norms, without stifling free expression. By implementing these commonsense protections, we can address democratic harms while fostering a healthy, open information environment.
Guidelines for Addressing Democratic Harms
- Focus on narrow, commonsense protections for our elections. There are well-established legal doctrines for how to require disclosures in political advertising, crack down on fraud, and protect the integrity of our elections. Given the sensitivity and urgency of protecting our democracy, it is important to stick to well-trodden, uncontroversial paths to ensure that protections can be put in place and upheld.
- Beware of the potential for over-moderation, censorship, and degraded privacy. Any policy proposal for tackling harms stemming from digital replicas should be evaluated carefully to ensure that it will not result in over-enforcement or have collateral effects that damage free expression or cause democratic harms of their own.
- Consider authentication and content provenance solutions that do not rely on watermarking synthetic content. Watermarking synthetic content is an often-discussed policy solution that merits additional research and investigation, but the technology and techniques being developed are not yet up to the task. We should also, or instead, invest in solutions that confirm and track the authenticity of genuine content. Bolstering authentic content builds trust in what is factual and true, rather than fixating on rooting out fake and synthetic content. (A brief sketch of how such provenance systems can work appears after this list.)
- We need a dedicated digital platform regulator. The FCC has proven effective in dealing with digital replica-related disinformation already, and is also acting to require disclosures about AI-generated content in political advertising. We should have a similar regulator on the beat of our digital communications channels. There is a strong tradition of regulators promoting content in the public interest and providing measured, expert oversight that preserves and supports free expression.
- We need to invest in better news to rebuild a trusted and robust information environment. We need policy solutions to support diverse sources of credible news, including local news. Rather than getting bogged down in over-moderating content, we should promote quality sources of information by fostering alternative business models, diverse and local ownership and representation, and models predicated on a public interest theory of news.
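To make the provenance idea referenced above more concrete, here is a minimal, hypothetical sketch of how a publisher might cryptographically attest that a piece of media is authentic, in the spirit of (but not implementing) industry standards like C2PA. The manifest format, field names, and workflow below are illustrative assumptions, not part of any proposal discussed in this series.

```python
# Minimal illustrative sketch of content provenance via digital signatures.
# Assumes the third-party "cryptography" package; the manifest format and
# field names are hypothetical examples, not any real standard.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_media(media_bytes: bytes, publisher_key: ed25519.Ed25519PrivateKey) -> dict:
    """Publisher side: hash the media and sign the hash, producing a manifest."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = publisher_key.sign(digest.encode())
    return {"sha256": digest, "signature": signature.hex()}


def verify_media(media_bytes: bytes, manifest: dict,
                 publisher_pub: ed25519.Ed25519PublicKey) -> bool:
    """Reader side: recompute the hash and check the publisher's signature."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest != manifest["sha256"]:
        return False  # media was altered after it was signed
    try:
        publisher_pub.verify(bytes.fromhex(manifest["signature"]), digest.encode())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()
    photo = b"...original image bytes..."
    manifest = sign_media(photo, key)
    print(json.dumps(manifest, indent=2))
    print("authentic:", verify_media(photo, manifest, key.public_key()))
    print("tampered:", verify_media(photo + b"edit", manifest, key.public_key()))
```

The point of this toy example is that the attestation travels with the genuine content: anyone can verify what a trusted publisher actually released, without anyone having to detect or label every synthetic file on the internet.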
Stay tuned for Part III, where we will examine some of the proposed legislation, including the NO FAKES Act and DEFIANCE Act, and measure them against our guidelines.