Why does the Superfund proposal include a new revenue stream for news organizations?
The availability of local news is an important component of a trustworthy information ecosystem, the vital link between informed citizens and a healthy democracy. Data from Poynter, Gallup, Knight Foundation, and others all suggest that local journalism is the most trusted source of authoritative information for communities. However, due to years of losses in circulation and advertising revenue to digital platforms (compounded by the industry’s own failure to address the changes in readership brought about by digital technology, and cost-cutting by financially motivated owners), local journalism now faces what has been described as an existential threat. Over the past 15 years, the United States has lost more than 2,100 local newspapers, one-fourth of the total. About 1,800 of the communities that have lost a paper since 2004 now lack easy access to any local news source, such as a local online news site or a local radio station.
Why will the Internet Superfund be effective?
The Internet Superfund is based on academic and social science research on what actually works to counter misinformation. One important concept is that countering misinformation — and more importantly, its devastating effects on individual well-being, societal cohesion, and trust in institutions — is as much about producing and elevating accurate and authoritative information as it is about detecting, countering or removing misinformation. Our proposal is also supported by information gathered from extensive tracking and reporting on the platforms’ efforts to address misinformation related to the COVID-19 pandemic and the 2020 election. One common strategy enabled their most effective approaches: partnering with authoritative sources of information, news analysis and fact-checking. These partnerships allowed the platforms to evaluate sources, verify the accuracy of claims, up- and down-rank content, label posts, direct people who’ve experienced misinformation to debunking sites, and understand what kinds of misinformation may create the greatest harm.
Doesn’t the Internet Superfund violate the First Amendment, since it involves the government in content moderation?
No, for a number of reasons. First, social media platforms may establish and enact their own content moderation standards pursuant to their terms of service and community guidelines, and they may label or demonetize content — courts have found this to be fully protected First Amendment expression. And fact-checking itself, along with resulting actions like warnings, labels, and interstitial content placed adjacent to posts, represents counter-speech by fact-checkers and platforms.
Our proposal simply creates a mechanism for facilitating payment for news analysis services from the major information platforms, and requires there be a fact-checking process in place. This requirement does not represent compelled speech, nor does it suddenly transform the platform into a state actor. The Internet Superfund doesn’t create a burden on speech, either – platforms are free to partner with organizations of their choosing that are compatible with their own content moderation standards, and maintain significant discretion in whether or how to act on the outcomes.
How are organizations qualified to do “fact checking”?
Fact-checking (in the context of information pollution) is the process of determining the truthfulness and accuracy of published information by comparing an explicit claim against trusted sources of facts. The most prominent fact-checking organization — though there are others — is the non-partisan International Fact-Checking Network (IFCN), a unit of the Poynter Institute, which certifies organizations that successfully apply to be signatories to a Code of Principles. The principles, globally developed, are a series of commitments organizations abide by to promote excellence in fact-checking. They comprise what is essentially a good journalistic process, encompassing principles related to fairness, sourcing, transparency, methodology, and corrections – that’s why local news organizations are so well-suited. It is an appropriate role for regulators to encourage the development of such services, provide opportunities for platforms and service providers to share information necessary to develop these services, ensure a competitive market in their development, and serve as vetting authorities. Fact-checking is a fast-growing and diverse industry with organizations taking different approaches to fighting misinformation, so in our proposal platforms are given discretion to partner with organizations best suited to their individual content moderation policies.
Does fact-checking work?
Research has consistently shown that flagging false news helps combat the sharing of deceptive information on social media. A recent study described as “the state of the art” in measuring the impact of fact-checking demonstrated that fact-checking reduces misinformation about specific claims related to COVID-19, and that there are ways to extend and optimize its impact. It described fact-checking as “paramount” in mitigating the potential harmful consequences of misinformation and enhancing societal resilience. A “Debunking Handbook” written by 22 prominent scholars of misinformation summarized what they believe “reflects the scientific consensus about how to combat misinformation.” It described the essential role of fact-checking in debunking and unsticking misinformation. However, no academic study we know of has been able to exactly replicate the dynamics that actually occur on platforms, in which the results of news analysis are used not just to label content but to downrank it, create friction in the sharing experience, notify users of designations after exposure, and enact other strategies embedded in the actual user experience. That suggests fact-checking as deployed on platforms may be even more effective than these studies measure. At a minimum, our proposal would require Facebook and other digital platforms to be more transparent about both the aggregate results and the practices associated with the use of fact-checking to mitigate disinformation – and to propose alternatives if those practices turn out to be insufficiently effective.
Isn’t fact-checking biased?
Some partisan groups have claimed fact-checking is arbitrary, or an extension of the liberal-leaning editorial bias of the organization doing the checking. This is demonstrably untrue. In fact, some of the organizations certified by IFCN lean right, such as Check Your Fact, part of the conservative site the Daily Caller, and The Dispatch, which says on its website it is “informed by conservative principles”. Again, our proposal allows the platforms to select the fact-checking organizations most compatible with their own content moderation standards – and their audience.
What platforms would be included in the Internet Superfund?
We propose that digital platforms meeting the following standards be required to contribute a user fee based on their total number of monthly active users:
- Is based in the United States;
- Relies predominantly on locating, indexing, linking to, displaying or distributing third-party (e.g., publisher, advertiser) or user-generated content for its commercial value;
- Operates an advertising business model that generates more than 85% of the company’s total annual revenue;
- Has advertising-based revenue exceeding $1 billion annually; and
- Has a total global monthly active base of at least 1 billion users.
This would mean that Google (based on its own 1.0 billion monthly active search users), Facebook (2.7 billion), YouTube (2.0 billion) and Instagram (1.1 billion) would currently qualify for the fund.
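The eligibility test above can be expressed as a simple rule check. The sketch below is illustrative only: the monthly active user counts come from this proposal, while the `us_based`, `content_driven`, and advertising-revenue figures are placeholder assumptions (the third-party-content criterion is qualitative, so it is reduced here to a boolean).

```python
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    us_based: bool
    content_driven: bool        # relies predominantly on third-party/user content
    ad_revenue_share: float     # fraction of total annual revenue from advertising
    ad_revenue_usd: float       # annual advertising revenue, in dollars (assumed)
    monthly_active_users: float

def qualifies(p: Platform) -> bool:
    """Apply the five eligibility criteria listed above."""
    return (
        p.us_based
        and p.content_driven
        and p.ad_revenue_share > 0.85      # >85% of revenue from advertising
        and p.ad_revenue_usd > 1e9         # ad revenue exceeding $1 billion
        and p.monthly_active_users >= 1e9  # at least 1 billion MAU
    )

# MAU figures from the proposal; revenue fields are illustrative assumptions.
platforms = [
    Platform("Google", True, True, 0.86, 100e9, 1.0e9),
    Platform("Facebook", True, True, 0.98, 70e9, 2.7e9),
    Platform("YouTube", True, True, 0.95, 15e9, 2.0e9),
    Platform("Instagram", True, True, 0.98, 20e9, 1.1e9),
]

print([p.name for p in platforms if qualifies(p)])
# → ['Google', 'Facebook', 'YouTube', 'Instagram']
```

Under these assumed figures, all four platforms named in the proposal pass the test, and a hypothetical platform below any single threshold would not.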
How much money would be in the Internet Superfund?
Using the standards above, and assuming a purely illustrative fee of $1 annually per monthly active user, we would create an annual fund of $6.8 billion for information analysis services to clean up the internet. In that case, the total user fees assigned to each platform would represent just 0.7% (Google) or 4% (Facebook) of total global corporate revenue. Even a fee of $0.10 per monthly active user, collected from the leading information distribution platforms, would yield $680 million for information cleanup.
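The fund-size arithmetic is a flat per-user fee multiplied by each platform’s monthly active users, summed across qualifying platforms. A minimal sketch, using the MAU figures cited in this proposal:

```python
# Monthly active user counts cited in the proposal.
MAU = {
    "Google": 1.0e9,
    "Facebook": 2.7e9,
    "YouTube": 2.0e9,
    "Instagram": 1.1e9,
}

def fund_size(fee_per_user: float) -> float:
    """Annual fund: a flat per-user fee multiplied by each platform's MAU."""
    return sum(fee_per_user * users for users in MAU.values())

print(f"${fund_size(1.00):,.0f}")  # → $6,800,000,000 at $1 per user
print(f"${fund_size(0.10):,.0f}")  # → $680,000,000 at $0.10 per user
```

Because the fee is linear in users, any fee level scales the fund proportionally, which is why even a dime per user still produces a nine-figure cleanup fund.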
Are there precedents for this proposal?
There is precedent in the United States for industries to be mandated to act in the public interest, either to guarantee the availability of certain goods and services or to address externalities that are not taken into consideration when unregulated firms make their decisions. For example, since the passage of the Communications Act of 1934, it has been recognized that because of their essential role in public discourse, radio and television licensees have an obligation to serve “the public interest, convenience and necessity.” This requires that each station licensee identify the needs and problems of its community of license, and air programming (news, public affairs, etc.) that is responsive to them. This is partially due to the position of telecommunications as an industry essential to democratic self-governance.
Isn’t this really a tax on platforms?
We’re aware of several past proposals to tax digital platforms in order to support journalism. But the two kinds of taxes that could possibly have a correlation with the amount of misinformation on platforms – taxes on either volume of digital advertising or volume of information flow – share two challenges. Some mistakenly believe they run afoul of the Internet Tax Freedom Act in the U.S., which bars multiple and discriminatory taxes on digital platforms by state and local governments – even though the ITFA doesn’t preclude taxation of digital platforms at the federal level. And taxing the platforms in the U.S. may deter efforts to create a harmonized international standard for digital taxation through the Organization for Economic Cooperation and Development (OECD). Varying digital service tax (DST) proposals in Europe and elsewhere are facing the same hurdle.
But more importantly, a calculation method based on the number of users also has real benefits: it avoids the need to know in advance the quantity of information that needs to be analyzed, or what proportion of it is “good” or “bad”. It avoids the complexity, introduced by a so-called “bit tax”, of accounting for the different quantities of information represented by video, images, or text. Monthly active users is a measure used by the financial community to assess growth and profitability among the platforms and is unlikely to be manipulated. The fees could be determined on a monthly or quarterly basis, allowing for any fluctuation in the number of active users. The fee also provides a strong incentive to clean up fraudulent, unidentified, and corrupting accounts on each platform. Lastly, using a user fee instead of a tax on advertising avoids having to address fluctuations in spending by advertisers, which may have no correlation with the amount of misinformation flowing across platforms.
Don’t we want the platforms to favor free speech?
We recognize that adoption of a public interest standard for content moderation, whether voluntary or mandated, represents a significant shift from the platforms’ historical emphasis on free expression. But a new framework for content moderation is already required: as they have become more active in content moderation and exercise more control over what content their users see, platforms are beginning to migrate from a pure free expression framework. We propose the goal be to create an online information environment that serves the public interest, and that we create a policy framework to allow public debate as to how the public interest is defined. We also recognize that certain kinds of speech, such as hate speech, can have the effect of chilling or deterring speech from marginalized identities and communities.