With this new four-part series, Public Knowledge unveils a vision for free expression and content moderation in the contemporary media landscape.
In Part I: Centering Public Interest Values, we provided a brief historical perspective on platform content moderation, reviewed the values that Public Knowledge brings to this topic, and discussed the importance of rooting content moderation approaches and policies in user rights. We also considered a theory that user rights should include the right to hold platforms liable when they fail to enforce the community standards or product features they contract for in their terms of service.
In Part II: Empowering User Choice, we discussed the structure of digital platform markets and the necessity of policy choices that create healthy competition and user choice. We also centered digital platforms in the broader ecosystem of news and information, and discussed how policy interventions may offset the impact of poor platform content moderation on the information environment by promoting other, diverse sources of credible news.
In Part III: Safeguarding Users, we turned to policy interventions specifically designed to enhance free expression and content moderation on digital platforms while preventing harm to people and communities.
Here in Part IV: Tackling AI and Executing the Vision, we will discuss the implications of the new “elephant in the content moderation room” – generative artificial intelligence – for free expression and content moderation. We will also discuss how our recommended policy interventions can be made durable and sustainable, while fostering entrepreneurship and innovation, through a dedicated digital regulator.
Readers looking for more information about content moderation can visit our issue page, learn more about the harms associated with algorithmic curation of content, and explore why multiple policy solutions will be required to ensure free expression and effective content moderation.
Tackling AI-generated Content
Journalists, researchers, and policymakers are wringing their hands over the potential of generative artificial intelligence (GAI) to further erode trust in information institutions. The worry proved especially salient in 2024, a major election year around the globe, in which newly available and increasingly sophisticated GAI tools allowed bad actors to produce deepfaked imagery and disinformation at unprecedented speed and scale. Although generative AI, thus far, does not appear to be the source of entirely new disinformation narratives, its scale and speed still give it the potential to increase the vulnerability of platform users to polarization, manipulation, health risks, market instability, and misrepresentation. We are already seeing how AI-enabled deepfakes and misinformation deepen distrust in government and may threaten our democratic institutions. Generative AI has also heightened the “liar’s dividend,” wherein politicians may induce informational uncertainty or encourage oppositional rallying of their supporters by claiming that true events are the manifestation of AI.
Platforms are using a variety of methods to identify and moderate AI-generated content. For example, Meta decided to handle AI-manipulated media by adding “AI info” labels to content on its social media platforms (as well as by integrating invisible watermarking and metadata to help other platforms identify content generated by Meta AI). Similarly, the social publishing platform Medium requires any writing created with AI assistance to be clearly labeled. Other platforms’ approaches are simply extensions of their existing strategies for mitigating disinformation.
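To make the labeling workflow concrete, here is a minimal, hypothetical sketch of how a platform might attach an “AI info” label when provenance metadata declares a generative tool. This is not Meta’s or any other platform’s actual implementation; the field names (“generator,” “synthetic”) and the Upload structure are illustrative assumptions.

```python
# Hypothetical sketch: label an upload as AI-generated based on provenance
# metadata (e.g., extracted upstream from an embedded manifest or watermark
# detector). Field names are illustrative, not any platform's real schema.

from dataclasses import dataclass, field


@dataclass
class Upload:
    content_id: str
    metadata: dict = field(default_factory=dict)  # hypothetical provenance fields
    labels: list = field(default_factory=list)


def apply_ai_label(upload: Upload) -> Upload:
    """Attach an 'AI info' label when metadata declares a generative-AI tool."""
    generator = upload.metadata.get("generator")           # e.g., a model name
    declared_synthetic = upload.metadata.get("synthetic")  # creator self-disclosure
    if generator or declared_synthetic:
        upload.labels.append("AI info")
    return upload


if __name__ == "__main__":
    post = Upload("abc123", metadata={"generator": "ExampleImageModel"})
    print(apply_ai_label(post).labels)  # ['AI info']
```

The sketch also illustrates a limitation discussed below: the label depends entirely on metadata that a bad actor can simply strip or never include.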
Many policymakers point to the proliferation of AI-generated content as grounds for increased content moderation scrutiny and platform regulation. However, some of the harms this content may create are better addressed elsewhere in the ecosystem. For example, regulation could introduce liability for the AI developers or deployers themselves (rather than for the platforms that simply distribute such content). (Note that some liability already exists because Section 230, generally speaking, does not and should not protect generative AI.) Existing laws can be clarified to ensure the underlying acts (like distribution of child sexual abuse material, or CSAM) are illegal if they are conducted using AI. Or those acts can be made illegal at the federal level if they are not now (like distribution of synthetic non-consensual intimate imagery, or NCII), which, among other things, would change the platforms’ incentives by placing this content outside the liability protections of Section 230.
Researchers and policymakers have also focused on requirements to track “digital provenance” and ensure “content authenticity” from AI developers through to distribution platforms. While this is a promising area, these methodologies remain imperfect, and the bad actors who matter most are the least likely to adopt or retain them. Some of these methods also raise concerns that they may encourage platforms to detect and moderate certain forms of content too aggressively, threatening free expression. This, too, has the potential to damage our democracy and would likely disproportionately impact marginalized communities.
Policy Parameters for Moderation of Synthetic Content
Public Knowledge has detailed the risks associated with GAI-generated digital replicas, and some of the policy guidelines we advocate for apply here as well. For example, we advocate for narrow, commonsense protections for our elections, leveraging well-established legal doctrines for requiring disclosures in political advertising, cracking down on fraud, and protecting the integrity of the electoral process. We urge caution about the potential for over-moderation, censorship, and degraded privacy. Any policy proposal for tackling harms stemming from GAI-generated content should be evaluated carefully to ensure that the solutions will not result in over-enforcement or have collateral effects that damage free expression or cause democratic harms.
Policymakers should consider authentication and content provenance solutions that do not rely on watermarking synthetic content. Watermarking synthetic content is an often-discussed policy solution that merits additional investigation, but the technology and techniques being developed are not yet up to the task. An alternative is to invest in solutions that confirm and track the authenticity of genuine content. Bolstering authentic content builds trust in what is factual and true, rather than fixating on rooting out fake and synthetic content. Such an approach is likely to see high adoption among good actors, whereas methods focused on detecting synthetic content would amplify the potency of any disinformation that bad actors manage to sneak past detection.
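The following is a simplified sketch of the “authenticate genuine content” approach: a publisher signs a manifest binding itself to a piece of content, and a platform later verifies that the content is unaltered and came from that publisher. Real provenance standards (such as C2PA) use public-key signatures and much richer manifests; the shared-key HMAC, key value, and field names here are illustrative assumptions chosen for brevity.

```python
# Illustrative sketch only: a publisher-side signing step and a platform-side
# verification step over a minimal content manifest. Not an implementation of
# any real provenance standard.

import hashlib
import hmac
import json

PUBLISHER_KEY = b"example-shared-secret"  # illustrative; real systems use public-key signatures


def sign_manifest(content: bytes, publisher: str) -> dict:
    """Publisher side: bind the content hash and source into a signed manifest."""
    manifest = {
        "publisher": publisher,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Platform side: confirm the signature and that the content is unmodified."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and hashlib.sha256(content).hexdigest() == claimed["sha256"]
    )


photo = b"...original image bytes..."
m = sign_manifest(photo, "Example Newsroom")
print(verify_manifest(photo, m))            # True: authentic and unmodified
print(verify_manifest(photo + b"edit", m))  # False: content was altered
```

The design point the sketch illustrates is the asymmetry in incentives: good actors gain by signing their content, while nothing depends on bad actors cooperating, which is the weakness of approaches that rely on marking synthetic content.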
In general, though, Public Knowledge advocates for solutions that address the harms associated with disinformation no matter how they originate. The resulting policy solutions would encompass things like requirements for risk assessment frameworks and mitigation strategies; transparency on algorithmic decision-making and its outcomes; access to data for qualified researchers; guarantee of due process in content moderation; impact assessments that show how algorithmic systems perform against tests for bias; and enforcement of accountability for the platform’s business model (e.g., paid advertising), as described elsewhere in this series.
Legislative Proposals for Moderation of Synthetic Content
As noted above, we believe there are certain circumstances where trade-offs between free expression and content moderation are necessary, such as in the context of elections. For instance, the AI Transparency in Elections Act requires labeling election-related AI-generated content within 120 days before Election Day, aligning with existing disclaimer requirements for political ads. This bill attempts to balance constitutional concerns with transparency needs by excluding minor AI alterations and, potentially, parody or satire. However, its time limitations fail to address post-election AI-related disinformation risks, such as those the nation collectively experienced after the 2020 election. Conversely, the Protect Elections from Deceptive AI Act creates a federal cause of action for content involving a candidate’s voice or likeness and prohibits distributing AI-generated content for election influence or fundraising. While well-intentioned, this legislation could infringe on political speech because, despite the name of the bill, it lacks any requirement that the content actually be deceptive in intent or in effect, and instead presumes that anything AI-generated is deceptive. This would empower candidates to sue over content that is neither deceptive nor harmful. This approach risks incentivizing litigation to silence critics and public debate, potentially leading to the censorship of political discourse by candidates and non-candidates alike, including journalists and nonprofits.
Thanks in part to powerful stakeholders in the entertainment industry, much of the current focus on generative AI content centers on digital replicas, defined most recently by the Copyright Office as “a video, image, or audio recording that has been digitally created or manipulated to realistically but falsely depict an individual.” A pair of bills – namely the NO FAKES Act and the No AI FRAUD Act – specifically aim to protect public figures’ publicity rights against unauthorized AI-generated replicas and digital depictions and to hold platforms liable for hosting those unauthorized replicas. Public Knowledge does not support these bills: they both adopt a flawed and complex intellectual property rights framework, fail to adequately address non-economic harms, and create problematic platform liability issues that could lead to over-moderation. As noted above, we have previously detailed the harms of digital replicas and explored other potential remedies.
Executing the Vision for Free Expression and Content Moderation
Many of the solutions we have framed in this series will call for ongoing enforcement and evolution as technological capabilities develop over time. Given the pace of innovation in digital technology and the need for specific, technical expertise to regulate it, we strongly believe a sector-specific, dedicated digital regulator is required.
The role of government is constitutionally constrained with regard to both citizens’ free expression and platforms’ content moderation. However, there is a strong tradition in the regulation of electronic media of promoting positive content (e.g., educational content), public safety (e.g., the emergency alert system), and diversity and localism. The fact is, our nation has always used policy to ensure that the civic information needs of communities are met. (Public Knowledge explored this tradition in a white paper explaining how we can combat misinformation through policy uplifting local journalism.) One core lesson of this history is that changes in technology have repeatedly required new, evolved, or renewed regulators, as well as regulations, to ensure that the public interest is protected. Often, this has taken the form of a dedicated, empowered regulator with the expertise and agility to understand and address both technological and societal change.
In our view, the same is true today. As we have noted in the sections above, the concentration of private power over public discourse is itself a threat to free expression. The key functions of a dedicated regulator – fostering competition, requiring interoperability, ensuring strong privacy protections, and prohibiting discrimination – would give consumers more choice and let users select platforms aligned with their values. The regulator may also have roles more directly related to the theories we have laid out above. For example, it may take on aspects of consumer protection and safety by enforcing requirements for clear terms of service, due process, and algorithmic transparency and choice, while also ensuring access to data for researchers. It would also be the appropriate body to determine the role and definition of concepts like fiduciary duties, duties of care, and codes of conduct with regard to content moderation.