In this third installment of a three-part series on digital replicas, we examine proposed legislation addressing the potential harms of these technologies and measure it against our guidelines. You can read Parts I and II here.
Near the end of the 118th Congress, we introduced a framework and recommendations for legislation addressing digital replicas in a two-part series. In Part I, we established a framework of potential harms stemming from unauthorized AI-generated digital replicas. In brief, there are potential commercial, dignitary, and democratic harms presented by new technologies that facilitate the creation of highly realistic digital replicas. In Part II, we built on this framework with a set of recommendations and guidelines for tackling these harms—and ensuring that the solutions did not create new harms themselves.
Last week, the first bill that directly addresses digital replicas was passed: the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (or TAKE IT DOWN) Act. It swept rapidly through the Republican-controlled Congress, buoyed by an endorsement from Melania Trump, with overwhelming support from Democrats despite flaws pointed out by civil society and Democrats on the Energy and Commerce Committee. It is a flawed first step, but it isn’t the only legislation up for consideration. A few other bills have momentum in Congress, and more are on the way, so in this post we turn to evaluating some of those bills using our framework of harms and our recommendations.
We undoubtedly need legislative solutions that address the full range of commercial, dignitary, and democratic harms created by unauthorized digital replicas, yet the current slate of legislation fails to rise to this challenge. The proposed laws overemphasize protecting the commercial interests of Big Tech and powerful media companies while further harming small and independent creators. They do take seriously the threats to dignity and the harms facing victims of non-consensual intimate imagery (NCII) abuse, but fail to implement even the minimal, commonsense safeguards needed to prevent censorship and protect us from further democratic harm.
We believe there is a path to protecting commerce, dignity, and democracy from the harmful effects of digital replicas. Let’s dive into each of the bills and see how they try to address each kind of harm and how they stack up against our recommendations.
The TAKE IT DOWN Act
Let’s start with the TAKE IT DOWN Act, since this bill raced through Congress and has already been approved. The TAKE IT DOWN Act focuses specifically on the issue of NCII and the serious dignitary harms resulting from it. It creates criminal penalties for the distribution of NCII and requires platforms to operate a notice and takedown system for NCII, with the Federal Trade Commission enforcing against platforms that fail to establish or honor such a system. The strength of this bill is its focus on the critical, high-impact dignitary harms and its targeted, simple solution. However, any takedown system must be considered in light of its potential for abuse, lest it become a tool for censorship or a limitation on free expression. As a result, we joined other experts in voicing our concerns about this bill, and remain vigilant about how it will be implemented. The TAKE IT DOWN Act focuses on a critical set of harms, but has a few serious (and hopefully still addressable) flaws.
Any notice and takedown system gives platforms strong incentives to remove content and can be used by bad actors to quash speech online, so it requires well-constructed safeguards. The TAKE IT DOWN Act is narrowly targeted at sexualized imagery, which shrinks the theoretical potential for misapplication, but that doesn’t mean much if there are no mechanisms to hold platforms and takedown requesters accountable, or to ensure improperly removed content can be expeditiously restored. Experts have pointed out that the new law will create intense time pressure and incentives for platforms to over-moderate content, and will impose onerous content monitoring requirements that smaller platforms will struggle to keep up with. To mitigate these issues, the TAKE IT DOWN Act should have added enforceable put-back requirements, carve-outs for matters of public concern, and stiff, automatic penalties for false takedown notices. There can and will be overreach, and examples are already cropping up of overbroad anticipatory policies that are restricting political speech. The drafters and supporters emphasize their desire to protect free expression, noting that “[t]he bill is narrowly tailored to criminalize knowingly publishing NCII without chilling lawful speech,” so the rush to pass the bill without stopping to address these serious issues is disheartening at best, and raises concerns that the real motive behind passage was more censorial.
A second key problem lies with the bill’s potential effect on encryption and privacy. As currently written, the TAKE IT DOWN Act also risks sweeping private and encrypted services into a system designed for public content takedowns. While the bill wisely excludes email services, it may still apply to platforms like direct messaging apps and cloud storage—tools that serve primarily private functions and where users reasonably expect confidentiality. This creates a serious problem for services that use end-to-end encryption (E2EE). These platforms are built so that even the provider can’t access the content users share. Imposing takedown obligations on them would either require breaking that encryption—undermining the security and privacy that users depend on—or enforcing impossible mandates to remove content they can’t see. Without quick clarification, now possible only through court challenges, the bill risks an interpretation that requires invasive scanning and surveillance across platforms, undermining user privacy, including for victims of abuse who rely on encrypted tools for safety. The law should be amended to make explicit that it applies only to public-facing platforms—not to encrypted services or other private communication tools.
Finally, there is a tremendous problem lurking at the heart of this bill: it is enforced by the Federal Trade Commission. Ordinarily this would be a strength; the FTC is an independent agency with a long history of nonpartisan, consumer-focused protection work. But right now, the independence of the Commission is under direct threat from the Trump Administration, which attempted to illegally fire the Democratic members of the Commission. For the FTC to function, it must remain independent, and passing a law that gives the FTC new powers while it is under such direct partisan political threat undermines the goals of the TAKE IT DOWN Act. Having now handed the FTC new powers that could be abused for censorship and retaliation against disfavored platforms, Congress must move decisively to reassert the agency’s independence by ensuring that Commissioners Alvaro Bedoya and Rebecca Slaughter are restored and that these lawless attacks end.
The TAKE IT DOWN Act received broad bipartisan support, passing by unanimous consent in the Senate and on a vote of 409-2 in the House, with the lone dissenters being Republican Reps. Thomas Massie (R-KY) (because he assessed it as “a slippery slope, ripe for abuse, with unintended consequences”) and Eric Burlison (R-MO) (whose motives remain entirely mysterious). Few politicians want to be seen voting against a law that offers such a clear solution to a pressing harm—but the consequence is that now we are stuck with a powerful, flawed new law whose fair and balanced administration will depend on courts, tech companies, and the FTC. Hopefully, its worst effects can be mitigated and what remains will provide the victims of NCII abuse genuine relief.
The DEFIANCE Act
One legislative proposal that works as currently drafted, and that also directly addresses the dignitary harms caused by digital replicas, is the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act. The DEFIANCE Act is another broadly supported, bipartisan bill, and it passed by unanimous consent in the Senate last Congress. This broad showing of support is encouraging, and perhaps unsurprising given the straightforward, common-sense proposal at the heart of the DEFIANCE Act.
The DEFIANCE Act simply amends the Violence Against Women Act (VAWA) to establish a federal civil remedy for victims of AI-generated NCII. Specifically, it clarifies that the existing civil cause of action for the non-consensual distribution of intimate images includes digital forgeries created through artificial intelligence or other technological means. This ensures that victims of synthetic NCII have access to legal recourse, including injunctive relief, punitive damages, and other appropriate remedies. In addition to closing this critical loophole, the DEFIANCE Act includes provisions to enhance victim protections. It extends the statute of limitations to 10 years, and that clock does not start until the victim discovers the violation or reaches the age of 18, whichever is later. The bill also implements privacy safeguards during litigation to prevent further trauma to victims.
Expanding access to the civil remedies in the VAWA for NCII (synthetic and otherwise) is a solid first step in addressing a challenging problem at the intersection of free expression and dignitary harms. The DEFIANCE Act aligns with the principles outlined in our previous writing by providing targeted legal remedies while respecting constitutional rights. By updating existing laws to reflect technological advancements, the DEFIANCE Act expands protections in a strong, predictable way that ensures victims will get access to justice without unexpected consequences in other areas.
The NO FAKES Act
In sharp contrast to TAKE IT DOWN and DEFIANCE, which focus squarely on dignitary harms and the broad effects digital replicas have on ordinary people, there is the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. The NO FAKES Act is clearly drafted to target commercial harms, and it shows in its endorsement list of Big Tech and media interests. That alone is not an issue: Our previous posts point out that commercial concerns are one of the long-standing pillars of legal rights regarding name, image, and likeness (NIL). But the NO FAKES Act misses the mark on how to address these legitimate harms in a way that benefits everyone, and it also fails to address other kinds of digital replica harms.
The NO FAKES Act creates a complex new system catering to the needs of celebrities and media companies. The simple summary is that it creates a new property right (a “digital replica right”) that allows an individual to authorize the creation of digital replicas. But this law isn’t really written for individuals; it is written for corporate actors, so that right is designed to be easily alienable—meaning an individual can sell it, trade it away, or give it up through a license in contracts with record labels, movie studios, or Big Tech companies.
The bill places some limits on these licenses, targeting the most obvious kinds of abuse (for example, a minor can’t license their right for longer than five years, and they automatically get their rights back when they turn 18), but those safeguards are fairly limited. A living adult can license their right away for up to 10 years, so long as the written agreement contains a “reasonably specific description of the intended uses of the applicable digital replica.” What counts as “reasonably specific” under this law? That will be up to the Hollywood agents and entertainment lawyers to work out for their clients, but it seems likely that websites will exploit their users’ likeness rights by burying “reasonably specific” use clauses in their Terms of Service.
Unfortunately, individuals covered by collective bargaining agreements, like actors represented by SAG-AFTRA, get even fewer protections, because collective bargaining agreements are exempted from the license requirements. Hopefully, unions and guilds will be able to secure better, more protective license terms for their members, but the terms could also be far worse—selling out less prominent creators to secure better treatment for big stars. Moreover, previous drafts of the bill included language that would have ensured that people were represented by counsel, but that language has been removed from recent versions. Overall, the system created by NO FAKES leaves the overwhelming majority of people more vulnerable to exploitative or deceptive deals than they are today, able to sign away the rights to their likeness for up to a decade, when they may actually have stronger protections under existing state laws.
Simply put, NO FAKES is designed to facilitate the simple commercial exchange of digital replication rights. Most people don’t want or need that framework; they just need protection from misuse of their NIL rights. That is why our previous writing has emphasized building existing state NIL frameworks into a unified set of federal protections that simplifies the causes of action. NO FAKES is being sold as a way to cut down on deepfakes and unauthorized digital replicas, but it is actually a law designed to create a market for their commercial exploitation. That’s useful if you are a media company, movie star, or record executive, but not great if you’re an ordinary social media user or even an independent creator.
Some of the statements about the NO FAKES Act from its sponsors and endorsers talk about the need to protect against dignitary harms like NCII, but nothing in NO FAKES addresses (or even mentions) NCII. The bill doesn’t contain any enhanced penalties, special procedures, or particular sensitivity to sexualized or abusive imagery; graphic NCII of a teenager and an unauthorized product endorsement are treated the same in all respects. The bill fails to grapple with the harms most likely to affect ordinary Americans, and fails to tailor its purported solution to that problem.
The best that can be said on dignitary harms is that the bill does have a notice and takedown mechanism that allows anyone to get content taken down quickly and easily. But that highlights the greatest flaw with NO FAKES as currently drafted: its overly broad takedown system exacerbates democratic harms instead of solving them. In our last post, and in earlier writing, we wrote in support of a notice and takedown system, but also noted that it was critical to balance strong protections with safeguards that preserve free expression and guard against censorship. These are not merely abstract values. The harms that will emerge from over-enforcement and weaponized takedown notices will further erode our democracy, damage our degraded information environment, and strangle the most vulnerable and marginalized voices. Nor is this concern purely hypothetical; the notice and takedown system of the Digital Millennium Copyright Act (DMCA) is notoriously flawed and has often been used frivolously, irresponsibly, and maliciously.
The NO FAKES Act creates a new potential weapon for censorship, one where anyone can demand that an unflattering or inconvenient image or utterance be stricken from the internet simply by claiming it’s fake. That covers everything from content that is merely embarrassing to the rich and powerful up through matters of grave public concern, like a photograph of police brutality, video of rioters at the Capitol, or audio of an elected official caught on a hot microphone. Any of these could be removed with a simple request, with no mechanism to ensure they are restored even if they are proven to be genuine. There are obvious safeguards that this bill could have incorporated, such as enforceable put-back requirements, carve-outs for matters of public concern, stiff and automatic penalties for false takedown requests, and different procedures for sexually explicit material versus other content. Safeguards of this kind would incorporate the hard lessons we’ve learned from the DMCA and years of research on the difficulty of content moderation, but NO FAKES has none of them.
The regrettable conclusion is that NO FAKES completely disregards dignitary and democratic harms in its blind pursuit of the most corporate-friendly commercial protections possible. As we’ve been saying from the start, a property-rights-style framework for addressing the commercial harms of digital replicas is probably the wrong approach; it is challenging to create a new system that gets it all right, and there are good alternatives in simply passing a federal law that unifies and simplifies the tangle of existing state laws. There are other promising paths too, including the Preventing Abuse of Digital Replicas Act (PADRA), which was introduced at the tail end of the 118th Congress. PADRA focuses entirely on commercial harms by building on the Lanham Act, and even though it too probably requires more analysis and input from stakeholders, it is a better starting point than the NO FAKES Act.
Conclusion and Recommendations
In a moment when consensus on digital replica harms is widespread, it’s disappointing that so few legislative proposals rise to meet the need. As we outlined in our framework, meaningful solutions must address the full spectrum of commercial, dignitary, and democratic harms—and our guidelines make clear that those solutions must also be targeted, rights-respecting, and resistant to abuse.
The TAKE IT DOWN Act is a first, but flawed, step. We are disappointed that it passed in its current form, given the clear and fixable flaws civil liberties and tech accountability advocates warned about. The DEFIANCE Act stands out as a rare success: it directly addresses one of the most serious dignitary harms—synthetic NCII—through focused, constitutional remedies grounded in existing law. Legislation like NO FAKES, on the other hand, not only fails to meet the moment, but actively worsens the landscape by ignoring dignitary and democratic harms in favor of a corporate-friendly property regime.
When it comes to moments like this, when new technology presents new avenues for harm, it is not enough to act—we have to get it right. With the TAKE IT DOWN Act, Congress only got it half right—and half-right laws can do real damage. Congress should continue to close gaps by passing the DEFIANCE Act, and should ensure that it doesn’t double down on the potential democratic and free expression harms of the TAKE IT DOWN Act by passing an even more flawed notice-and-takedown bill like the NO FAKES Act.