Celebs: They’re Just Like Us! (Or Might Be, Under These New Anti-Deepfake Bills)

One concern that’s resurfaced in artificial intelligence policy discourse is the risk of AI as an impersonation tool. This has been of particular concern for celebrities, but there’s plenty to worry about for non-celebrities, too. Although these tools have grabbed headlines for imitating creative artists, the underlying problem isn’t copyright infringement (you don’t have a copyright in your own likeness) – it’s that an AI can use someone’s name, image, or likeness to impersonate that person and cause them reputational or financial harm.

(Some) states have “right of publicity” laws that allow (some) individuals to control (some) uses of their likeness. But these laws aren’t hugely helpful to most people, because they (a) are only available to folks in certain states; (b) don’t protect against non-commercial uses or impersonations; and (c) are designed to protect celebrities and public figures, whose likenesses are already commercially valuable. If you’re not a public figure, or you live in a state without some kind of publicity rights framework, you have no recourse. 

Congress is trying to change this. Congresswoman Maria Salazar recently introduced the No AI FRAUD Act; late last year, Senator Chris Coons released discussion text of his proposed NO FAKES Act. Both try to expand the right-of-publicity framework to cover private citizens, and the results are mixed.

The traditional right of publicity is an economic right. If you can make money licensing your likeness (including your voice) and someone uses it without paying you, that use deprives you of income, and the unauthorized user must compensate you. (As a practical matter, this is usually structured as a property right – a snarl discussed more below.) Protecting against these kinds of unauthorized uses is a top priority not only for celebrities and other well-known individuals, but also for people such as actors and singers whose livelihoods depend on their unique look or sound.

But most people outside these fields aren’t worried about AI funneling away revenue. A more realistic threat profile for many people is their image being used to generate nonconsensual pornography, deceive employers or the public, or otherwise threaten or humiliate them. The risk is particularly high for women, people of color, and members of other marginalized groups.

Using an economic framework to also protect against these non-economic harms is a stretch. Both bills attempt it by creating a new kind of property right, akin to copyright or trademark, that lets individuals control the use of their likeness. The main reasons are that (a) traditional right-of-publicity laws are considered intellectual property, so it’s a familiar framework; (b) proponents want to preserve the ability of people in the “professional” category discussed above to license out their likenesses to studios, record labels, and other third parties; and (c) these proponents also want licensing (and the power to prevent unauthorized use) to survive even after that actor or singer has died.

This is a dangerous starting place. Intellectual property rights come with a massive amount of common law and statutory baggage. Because IP is fundamentally a speech-regulating regime (if you don’t believe me, take the Supreme Court’s word for it), it requires massive contortions and carve-outs to comply with the First Amendment. Both bills try to sidestep this with some combination of listing explicit cases in which an unauthorized AI replica is defensible (such as using a person’s likeness in a biopic where they are the subject, or in criticism and commentary on current events), along with general but self-evident statements like, “The First Amendment is a defense.” While I can’t blame anyone for wanting to avoid the inevitably tangled jurisprudence that comes with accommodating the First Amendment, it’s a fool’s errand; neither bill’s list of exceptions encompasses the full realm of protected speech, and the No AI FRAUD Act even attempts to lay out a ham-handed balancing test that mostly asks variations of, “How much money is at stake?”

Moreover, IP licenses are, by default, open-ended in duration and all-purpose in scope; in practice, this would likely just become a new line in every record-label contract requiring the recording artist to sign over their likeness to the label, for any commercial use, for the full duration of this “property right.” There needs to be a mechanism to ensure that this doesn’t happen, and that individuals automatically regain the rights to their likeness after a reasonable amount of time. “Termination” schemes, where the onus is on individuals to jump through administrative hoops to reclaim their rights, end up being a scrambled mess that’s easily abused by the major licensees. An automatic reversion – after, say, 10 years – or a “use it or lose it” provision requiring reversion after three years of non-use would ensure that professionals retain the benefit of their image rights.

And the length of the term itself is another sticking point. There should be some term of protection after death; the idea of CGI duplicates popping up before a celebrity’s body is even cold is grotesque. But the NO FAKES Act pegs the term of protection at 70 years after the person’s death. The overwhelming majority of people won’t need anything approaching that. (No AI FRAUD is set at a more reasonable 10 years after death, along with some confusing language that may indicate a longer, or possibly shorter, term, depending on how you read it.)

Using a property-law framing is, frankly, a bad idea. Congress is designing a new law from the ground up – and, in many ways, trying to shed some of the baggage (such as fair use) of existing IP law. If they want a clean slate, there are plenty of alternatives to consider: Congress could directly target the harms it’s trying to prevent with a tort-style regime, or it could frame the protection as a privacy right. Nothing in the goals of these bills requires a property-rights or IP framework as a starting point.

Liability for platforms and services is also a mess. No AI FRAUD explicitly targets what it calls “AI cloning technology,” and makes it an offense merely to offer this technology in interstate commerce. This is unworkable for any number of reasons (not least of which is that it would ban things like Apple’s accessibility-focused voice-cloning tool and most existing movie CGI technology). Both bills also imply, without clarification, that any website onto which a user uploads an unauthorized digital replica might be liable for that replica. Characterizing the new right as “intellectual property” also means that Section 230 of the Communications Act would not shield services (for example, social media platforms) hosting material that infringes that right. But a platform has no way to know whether material infringes, so this would make services less likely to host user-submitted material at all and could lead to over-broad moderation policies. (The same is true of Section 230 and copyright, but copyright has a separate liability shield for platforms: the Digital Millennium Copyright Act.)

The most obvious (and workable) answer for liability – and as a copyright lawyer, I say this begrudgingly – seems to be a notice-and-takedown system à la the DMCA. But we have already seen the scale of abuse that “regular” DMCA notice-and-takedown invites; what can we expect from a system that requires removal of a work whenever someone in it claims they’re being impersonated? Embarrassing videos of public figures, recordings of police misconduct, and news footage could all be memory-holed with a single bad-faith claim – especially since public figures are already in the habit of dismissing anything they don’t like as “AI generated.” This is a place where lawmakers need to learn the lessons of the DMCA and include strong, automatic civil penalties for abusive or inappropriate takedowns.

To be clear: There’s some unalloyed good in these bills. The No AI FRAUD Act explicitly lists unauthorized use in child sexual abuse material, sexually explicit imagery, and “intimate images” as per se harms subject to significant damages. But there’s still a lot of work to be done, from moving away from an IP framework to thinking through the potential for abuse. People deserve real protection here – and these bills, as they stand, aren’t it.