In a recent interview, former President Obama called the lack of a “common baseline of fact” the “single greatest threat to our democracy.” Our increasingly polluted information environment — including the proliferation of “crazy lies and conspiracy theories” on social media — divides society while undermining public faith in facts, impeding our ability to face global challenges that threaten our very existence.
And that’s the point. That is, regardless of the topic, at the core of much of the disinformation we see on digital platforms today is the desire to undermine democratic institutions and manipulate followers as a means of exerting political influence and control. It’s not a bug; it’s a feature.
Right now, much of the debate in Congress about how Big Tech companies should moderate their platforms is focused on Section 230 of the Communications Decency Act, the so-called “twenty-six words that created the internet.” Section 230 protects social media companies from liability related to hosting, editing, or taking down third-party content. At last count, efforts to “fix” it include 16 different bills in Congress (some of which are fundamentally at cross-purposes); an Executive Order and ensuing petition by the National Telecommunications and Information Administration; a potential (and illegitimate) rulemaking by the Federal Communications Commission; several Congressional hearings; a set of recommendations and proposed legislative text from the Department of Justice; an out-there presidential threat to veto the annual defense funding bill; and, I’m guesstimating here, kajillions of tweets from technologists, journalists, and politicians.
Enough. We offer a creative, evidence-based policy proposal that lets regulators — and the new administration — step away from the political black hole that most of the current Section 230 reform efforts have become and instead alleviate the twin problems polluting our information ecosystem: the virulent spread of misinformation on digital platforms, and the crisis in local journalism.
A Superfund for the Internet: Content Moderation for the Public Interest, Not Political Interests
Toxic content on digital platforms has been compared to toxic chemicals dumped by industrial companies into fresh water, and “paranoia and disinformation” dumped in the body politic has been described as “the toxic byproduct of [Facebook’s] relentless drive for profit.” That is why we’ve modeled our proposal on the 1980 “Superfund” law, under which the Environmental Protection Agency has cleaned up toxic waste at more than 400 sites around the country: a “Superfund for the Internet.” We propose using public policy tools to create a market mechanism to clean up information pollution. This Superfund would create demand (and payment) from the major information distribution platforms for news analysis services such as fact-checking, and would incentivize the development and supply of those services among qualified news organizations. The payment would be collected from qualifying platforms and distributed to qualifying news organizations through a “trust fund” established and administered by an independent body — one with no role in the actual identification, review, analysis, or action on content.
In the U.S., we’ve mandated certain industries to conduct some of their activities in the public interest, either to guarantee the availability of certain goods and services or to address externalities that are not taken into consideration when unregulated firms make their decisions. For example, since the Communications Act of 1934 recognized the essential role of radio and television in the nation’s public discourse and democratic self-governance, licensees have an obligation to serve “the public interest, convenience and necessity.” In other industries that are deemed critical infrastructure, such as utilities and finance, we have also established regulators and regulations to ensure that the public interest is protected. We believe that the dominance of digital platforms in our political and social discourse therefore qualifies them for a similar public interest mandate. The Superfund for the Internet compels the dominant information distribution platforms to include fact-checking in their content moderation approach, as a means of serving the public interest.
An Evidence-Based Approach to Content Moderation
Our proposal is based on research into what actually works, in practice, to counter misinformation. One consistent finding is that countering misinformation is as much about producing and elevating authoritative information as it is about detecting and removing misinformation. This “information curation” must be accomplished without impinging on constitutionally protected speech. Evidence gathered from tracking the platforms’ efforts to address misinformation related to the COVID-19 pandemic and the 2020 U.S. election supports the case for the Superfund for the Internet. Both events showed that — in situations in which the potential for harm is high and the number of sources of authoritative information is finite — platforms can and will set new standards and develop new solutions to problems previously positioned as insoluble. One common strategy enabled their most effective approaches: partnering with fact-checking organizations and authoritative sources of information. These partnerships allowed the platforms to evaluate sources, verify the accuracy of claims, algorithmically up- and down-rank content, label posts, direct people who had been exposed to misinformation to debunking sites, and understand what kinds of misinformation may create the greatest harm — all without placing the platforms in their dreaded position of being “arbiters of truth.”
But so far, these efforts have been entirely at the discretion of the platforms, and their incentives (including the desire to avoid bad publicity and the desire of their advertisers to avoid association with harmful content) are insufficient to address the individual, social, and political problems associated with other kinds of misinformation. Aligning platforms’ incentives with those of the public interest requires policy mechanisms that lower the cost of good behavior and/or raise the cost of bad behavior while not mandating censorship of permissible speech.
Elevating Sources of Trusted Information
Traditionally, local journalism was the primary source of authoritative information for communities. A Poynter Media Trust Survey in 2018 found 76% of Americans across the political spectrum have “a great deal” or “a fair amount” of trust in their local television news (compared to 55% trust in national network news), and 73% have confidence in local newspapers (compared to 59% in national newspapers). A Gallup survey in 2019 found 74% of Americans trust the accuracy of the news and information they get from local news stations (compared to 54% for nightly network news programs), and 67% trust their local newspapers (compared to 49% for national newspapers). A 2019 study from the Knight Foundation’s Trust, Media, and Democracy Initiative with Gallup found that 45% of Americans trust reporting by local news organizations “a great deal” or “quite a lot,” while 15% have “very little” or no trust at all. But in the same survey the public’s view of national news organizations was more negative than positive: Only 31% expressed “a great deal” or “quite a lot” of trust, and 38% “very little” or no trust in national news. In aggregate, the data suggests that the viability and availability of local news are important components of a trustworthy information ecosystem, the vital link between informed citizens and a healthy democracy.
Perhaps because of how much they trust its reporting, 71% of Americans think their local news organizations are “doing either somewhat or very well financially.” However, due in part to years of losses in circulation and advertising revenue to digital platforms (as well as to their own failure to adapt to changes in readership brought about by digital technology, plus consolidation and cost-cutting by financially motivated owners), local journalism now faces what has been described as an existential threat. Over the past 15 years, the United States has lost more than 2,100 local newspapers, one-fourth of the total. About 1,800 of the communities that have lost a paper since 2004 now have no easy access to any local news source, such as a local online news site or a local radio station, creating “news deserts.” A recent Congressional report provides more of the historical perspective, including some hard truths about harms to the industry from the anticompetitive behavior of the dominant search and social media platforms. Most importantly, the report confirms that it can and should be Congress’s role to build a bridge between the present crisis in local journalism and new solutions like this one.
Checking the Facts About Fact-Checking
Fact-checking (in the context of information pollution) is the process of evaluating the truthfulness and accuracy of published information by comparing an explicit claim against trusted sources of facts. And it works: Research has consistently shown that flagging false news helps combat the sharing of misleading information on social media. One study, described as “state of the art” in measuring the impact of fact-checking, demonstrated that fact-checking reduces misinformation about specific claims related to COVID-19, and that there are ways to extend and optimize its effect. It described fact-checking as “paramount” in mitigating the potential harmful consequences of misinformation and enhancing societal resilience. A Debunking Handbook written by 22 prominent scholars of misinformation summarized what they believe “reflects the scientific consensus about how to combat misinformation.” It described the essential role of fact-checking in debunking and unsticking misinformation. And, importantly, some pitfalls suggested by earlier research, like the so-called “backfire effect” in which corrections actually strengthened misperceptions, have themselves been largely debunked.
The dominant information distribution platforms, including Facebook, Instagram, Google, and YouTube, already use fact-checking services. But today, the user experience for fact checks varies widely by platform, and is often opaque or ambiguous. Researchers say it is impossible to know how successful or comprehensive the companies have been in removing bogus content because the platforms often put conditions on access to their data. Even the platforms’ own access tools, like CrowdTangle, do not allow filtering for labeled or fact-checked posts. The platforms control virtually every aspect of their interaction with fact-checking organizations, which have complained that their suggestions for improvements to the process and requests for more information on results go unheeded.
There are strong signs that the effectiveness of the platforms’ efforts could be increased through an independent oversight process. No academic study we know of has been able to exactly replicate what actually occurs when the results of fact-checking are used to label content, downrank it, create friction in the sharing experience, notify users of designations after exposure, and enact other strategies that are embedded in the actual user experience. From what we do know, there may be a critical multiplier effect. Facebook’s website notes that a news story that’s simply been labeled false sees its future impressions on the platform drop by 80%. Facebook has also claimed that warnings on COVID-19 misinformation deterred users from viewing flagged content 95% of the time. Twitter reported a 29% decrease in “quote-tweeting” of 2020 election information that had been labeled as refuted by fact-checkers. Our proposal would require the qualifying digital platforms to be more transparent in their practices as well as the results associated with them, to share best practices, to share privacy-protected data with researchers, and to try alternatives from researchers and civil society groups to improve results.
Some partisan groups have claimed fact-checking is arbitrary, or an extension of the liberal-leaning editorial bias of the organization doing the checking. This is demonstrably untrue. The fact-checkers themselves come from a range of backgrounds, including journalism but also political science, economics, law, and public policy. In fact, some of the organizations certified by the International Fact-Checking Network (IFCN), the most prominent body accrediting fact-checkers, lean right, such as Check Your Fact, part of the conservative site the Daily Caller, and The Dispatch, which says on its website it is “informed by conservative principles.” Fact-checking is a fast-growing and diverse industry, with organizations taking different approaches to fighting misinformation. It’s inevitable that some of the mistrust in the media as a public institution has migrated to the fact-checkers. There may be ways to enhance or supplement fact-checking, such as the addition of media literacy tools that help consumers evaluate the news themselves. Our proposal allows the platforms to select the fact-checking organizations most compatible with their own content moderation standards — and their audiences.
A First Amendment-Friendly Solution to Misinformation
The First Amendment and general concerns for freedom of expression require exercising a good deal of caution in any approach to content moderation. But the First Amendment does not foreclose every legislative effort to protect individuals, or society as a whole, from harassing or fraudulent content, or from content that seeks to undermine democracy and civic discourse.
The First Amendment pertains to the role of government in controlling or suppressing speech, not the role of private companies. Social media platforms may establish and enforce their own content moderation standards pursuant to their terms of service and community guidelines, and they may label or demonetize content — courts have found this to be fully protected First Amendment expression. Fact-checking itself, as well as resulting actions like warnings, labels, and interstitial content placed adjacent to posts, constitutes counter-speech, the remedy the First Amendment is meant to foster. Our proposal does not change these roles of the platforms: the governing body established for the Superfund for the Internet has no role in the actual identification, review, or action on content. It is simply a mechanism for facilitating payment for news analysis services from the major information platforms, and it requires only that such a fact-checking process be in place. Although the act of fact-checking is inherently not content-neutral (nor is it required to be), the requirement that a fact-checking process be in place is — and the involvement of the government in establishing this requirement does not suddenly transform the platforms into state actors.
How and How Much To Fund the Superfund for the Internet
We’re aware of several past proposals that call for taxing the platforms in order to create trust funds to help local journalism, but we prefer a different approach. We propose that digital platforms meeting the following standards be required to contribute a federal user fee (defined as a “fee assessed to users for goods or services provided by the federal government”) to the Superfund for the Internet, in an amount based on their total number of monthly active users:
- Based in the United States;
- Relies predominantly on locating, indexing, linking to, displaying, or distributing third-party (e.g., publisher, advertiser) or user-generated content for its commercial value;
- Operates an advertising-based business model that generates more than 85% of the company’s total annual revenue;
- Has advertising-based revenue exceeding $1 billion annually; and
- Has a global base of at least 1 billion monthly active users.
This would mean that Google (based on its own 1.0 billion monthly active search users), Facebook (2.7 billion), YouTube (2.0 billion), and Instagram (1.1 billion) would currently qualify for the fund. Assuming a purely illustrative fee of $1 annually per monthly active user, we would create an annual fund of $6.8 billion for information analysis services to clean up the internet. In that case, the total user fees assigned to each platform would represent just 0.7% (Google) or 4% (Facebook) of the company’s total global revenue. Even a fee of 10¢ per monthly active user, collected from the leading information distribution platforms, would yield more than half a billion dollars a year for information cleanup.
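To make the arithmetic concrete, here is a minimal sketch, in Python, of how the qualification test and the illustrative per-user fee could be computed. The user counts are the ones cited above; the revenue figures, and the names `Platform`, `qualifies`, and `annual_fee`, are hypothetical placeholders for illustration, not part of the proposal.

```python
# Illustrative sketch of the proposed user-fee calculation.
# User counts come from the proposal above; ad-revenue figures are
# placeholders assumed to satisfy the revenue criteria.
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    us_based: bool              # based in the United States
    ad_revenue_share: float     # share of total annual revenue from advertising
    annual_ad_revenue: float    # annual advertising revenue, in dollars
    monthly_active_users: int   # global monthly active users (MAU)

def qualifies(p: Platform) -> bool:
    """Check the eligibility criteria listed above."""
    return (p.us_based
            and p.ad_revenue_share > 0.85
            and p.annual_ad_revenue > 1_000_000_000
            and p.monthly_active_users >= 1_000_000_000)

def annual_fee(p: Platform, fee_per_mau: float) -> float:
    """User fee owed to the Superfund at a given per-user rate."""
    return p.monthly_active_users * fee_per_mau if qualifies(p) else 0.0

platforms = [
    Platform("Google Search", True, 0.9, 100e9, 1_000_000_000),
    Platform("Facebook",      True, 0.9, 70e9,  2_700_000_000),
    Platform("YouTube",       True, 0.9, 15e9,  2_000_000_000),
    Platform("Instagram",     True, 0.9, 20e9,  1_100_000_000),
]

for rate in (1.00, 0.10):  # $1 and 10 cents per monthly active user
    fund = sum(annual_fee(p, rate) for p in platforms)
    print(f"Fee of ${rate:.2f}/MAU -> annual fund of ${fund / 1e9:.2f} billion")
# Fee of $1.00/MAU -> annual fund of $6.80 billion
# Fee of $0.10/MAU -> annual fund of $0.68 billion
```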
Using monthly active users as the basis of the fee avoids the need to know in advance the quantity of information that will need to be analyzed — the quantity of misinformation (and the potential for harms associated with it) increases with the number of users of each platform. Monthly active users is a measure already used by the financial community to assess growth and profitability among the platforms, so it is not easily subject to manipulation. It also provides a strong incentive to clean up fraudulent, unidentified, and corrupting accounts on each platform. Lastly, unlike a tax on revenue, it avoids fluctuations that may have no correlation with the amount of misinformation flowing across the platforms. In order to avoid politicization of the allocations to fact-checking organizations, the funds would be disbursed as fees based on hours worked, volume of content reviewed, or number of fact checks completed for the platforms.
Layering Solutions to Misinformation
No one wants to go back to the days when three major networks and a handful of newspapers — all controlled by wealthy white men — held sway over the flow of information critical to civic discourse. And no single market adjustment can fully solve the complex problem of information pollution we face today. As one researcher recently suggested at a conference on the topic, we should be inspired by the “Swiss cheese model” of pandemic defense; that is, layer solutions that may be individually “holey” but, when stacked on top of each other, provide effective protection against misinformation. These may include competition policy, international partnerships, or regulation of the platforms’ internal mechanisms. Heck, we’re even open to common-sense reform of Section 230.
Over the past week or so, many of the very same misinformation peddlers that tried to undermine public faith in one of our most fundamental democratic processes — the presidential election — moved on to spreading lies about the coronavirus vaccine, risking a public health catastrophe. This proves two things: Their purpose is manipulation, and they will not stop. We cannot rely on the discretion of the digital platforms to decide if or when to intervene, or on hedge funds to decide to reinvest in local news. Whatever other approaches we may adopt to clean up our polluted information ecosystem, the stakes are too high to do nothing — and the Superfund for the Internet can help.