ScarJo Oh No! Unraveling Scarlett Johansson’s Claims Against OpenAI

The drama around OpenAI using a Scarlett Johansson soundalike for its new digital assistant has sparked a spirited debate online.

The drama around OpenAI using a Scarlett Johansson soundalike for its new digital assistant – a move arguably designed to evoke the actress’s role as, uh, an AI assistant in Spike Jonze’s sci-fi movie, “Her” – has led to a renewed debate about what rights public figures have in their likenesses, voices, and other hallmarks of their identities.

Public figures like Scarlett Johansson rely on rights of publicity – also known as name, image, and likeness rights, or NIL – to control the use of their identity. NIL rights exist as a state-by-state patchwork; some states have expansive rights, while others offer limited or no protection. The traditional rationale for NIL rights goes something like this: You, as a public figure, have a likeness that is commercially valuable to many people. Companies think that your apparent endorsement will boost sales, attract customers, or otherwise generate value for them. It doesn’t matter that the internal logic may seem tenuous (John Cena wearing Crocs) or outright silly (Snoop Dogg hawking Hot Pockets, anyone?). The company wants to use something identifiably connected to your identity to sell a product. NIL rights provide a remedy in situations where that identifiable likeness is used without approval. This includes soundalikes – an application of the law with some very entertaining fact patterns of its own, most famously Bette Midler’s successful suit against Ford over a soundalike singer in a car commercial.

So this seems like it could be an open-and-shut case for Johansson, right? Not quite.

There are a few important things to remember. First, for NIL rights to apply, the part of her likeness that’s used must be identifiable as her. Not every celebrity is famous in the same way. For example, I couldn’t identify Sydney Sweeney by voice alone, and although Patrick Warburton has voiced some of the most memorable characters in animation, I couldn’t pick his face out of a lineup. In this case, Johansson claims OpenAI copied her voice, which is less intimately linked with her identity than her face. Whether an average person would hear the voice of OpenAI’s digital assistant, “Sky,” and think, “That’s Scarlett Johansson!” is not open-and-shut; it’s a question for a jury.

Second, if anything, the developers at OpenAI weren’t striving to evoke Johansson; they were, more specifically, trying to copy Samantha, Johansson’s character in “Her.” This may seem like a distinction without a difference, but legally, it is important. It may help to think about what this would look like in other contexts. For example, if an NYPD patrol bot suddenly started barking orders in a simulation of Kevin Conroy’s dulcet baritone, it would not be an attempt to evoke Kevin Conroy; it would be an attempt to evoke Batman, the character he voiced for three decades. That means the relevant claim may not be an NIL violation; it might be a copyright violation for creation of an unauthorized derivative work. Yes, that’s right: OpenAI may have created a (potentially very expensive) Spike Jonze fanfic.

If this seems somewhat silly and convoluted, that’s because it is. AI is overhyped in many areas, but it is remarkably effective at finding the pressure points in our legal system. That is not to say that there aren’t legal solutions here: Traditional NIL rights are still enforceable whether AI is involved or not. 

But high-profile controversies like this – and the perception of their novelty and complexity – contribute to the mounting pressure for federal legislation to address AI-enabled impersonation. As I’ve previously pointed out, this is a particular concern for celebrities like Johansson, but there is plenty to worry about for non-celebrities, too.

Striking a balance here is tricky. There is a lot that could go wrong in passing laws that expand NIL rights to cover both the commercial concerns of celebrities and working creatives and the risks to personal privacy and dignity that may matter more to private individuals. But if our lawmakers are going to step up to empower people to address potential harms enabled by AI, it seems important to make sure those laws protect everyone, not just the people who make headlines.