YouTube Takedowns Offer a Chilling Look at What a Filtered Web Could Look Like


    If you happened to page through this week’s Sunday edition of The New York Times, you might have noticed something unusual about the front page of the business section. Instead of leading with a story about AIG’s transgressions or the latest eco-tech startup, this week’s Sunday business section gave top billing to the brewing conflict between YouTube users and aggressive rights holders, most notably the “big four” record labels. While the placement of the article may have struck some as odd, the editors at the Times were right to file the story under “business.” In a web-driven economy, few commodities are more valuable than user-generated content. And as the article ably demonstrates, unlike with most valuable commodities, there’s a cadre of deeply entrenched, extremely powerful companies who possess the ability to disrupt the flow of user-generated content at will. Trivial though it may seem, the outcome of this debate could come to define the very nature of the web that our children inherit–and the long-term viability of the Internet economy may hang in the balance.

    At issue in the Times article is the Warner Music Group’s recent abuse of YouTube’s Content ID system, a system that allows rights holders to flag unlawful uses of copyrighted content for automatic removal. Content ID was ostensibly designed to help music labels and movie studios combat the unauthorized uploading of music videos and film clips to YouTube. Upon its release, however, the labels and studios quickly realized that Content ID was also a highly effective instrument of censorship. By using Content ID indiscriminately, a rights holder can easily compel YouTube to remove any and every use of a work–whether lawful or not–placing the onus on users to actively defend the legality of their works. Of course, it was only a matter of time until a major rights holder used this ability as a bargaining chip, holding lawful fair uses hostage in order to gain an unfair advantage in business and legal negotiations with YouTube.

    In December of last year, the Warner Music Group and YouTube reached an impasse in negotiations regarding the licensing rates that YouTube pays for the use of Warner content. Frustrated with the results of the negotiations, Warner took its ball and went home, using Content ID to remove virtually every video from YouTube that made use of a Warner song. The end result was what Fred von Lohmann at EFF called “YouTube’s January fair use massacre,” a spate of takedowns that saw thousands of remixes, recontextualizations and critical commentaries pulled from the site.

    Forcing the Web Through a Filter

    While this episode was troubling in and of itself, the larger implications that surround it are even more upsetting. As we’ve mentioned before, Content ID is a flawed tool, a fact that von Lohmann has pointed out time and again. Given that even legal scholars often can’t arrive at a consensus as to what does and does not constitute fair use, it should come as no surprise that Content ID lacks the ability to distinguish between fair uses and infringing ones. Simply put, Content ID allows content owners to throw the baby out with the bathwater if they so choose, leaving it to users–who may not be aware of their fair use rights–to sort things out after the fact.

    Herein lies the fallacy inherent to automated content filtering systems: they purport to do something that they cannot, which is to identify and eliminate unlawful uses of copyrighted content. While tools like Content ID can certainly help rights holders find unauthorized uses of their content online, an automated system alone cannot eliminate unlawful uses without also endangering the rights of lawful users. When used lazily, with little oversight, or maliciously, content filtering systems can do more harm than good, a fact that’s evidenced by Warner Music’s abuse of Content ID.

    Despite this fact, both Hollywood and the music industry continue to press ISPs and governments around the globe to expand the use and scope of content filtering technologies. We’ve all heard overtures being made to so-called “copyright filters”: automated systems that use Deep Packet Inspection (DPI) and techniques like digital watermarking/fingerprinting to inspect every bit of traffic that traverses a provider’s network, blocking any bits that are suspected of infringing on copyrights.
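    The core problem with such filters can be sketched in a few lines of code. The Python below is purely illustrative, not a description of any real system: production fingerprinting uses perceptual hashes that survive re-encoding rather than the cryptographic hash used here for simplicity, and names like `KNOWN_FINGERPRINTS` and `filter_upload` are invented for this example. What the sketch makes visible is that a matcher can answer only one question–“do these bits match a registered work?”–and has no input through which fair use could even be expressed.

```python
import hashlib

# Hypothetical registry of fingerprints supplied by rights holders.
# (A stand-in: real systems store perceptual audio/video fingerprints.)
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"warner-song-audio-bytes").hexdigest(): "Warner song",
}

def fingerprint(chunk: bytes) -> str:
    """Reduce a chunk of media data to a fixed-length fingerprint."""
    return hashlib.sha256(chunk).hexdigest()

def filter_upload(chunks) -> str:
    """Block an upload if ANY chunk matches a registered work.

    Note what is missing: there is no way to ask whether the match is a
    ten-second clip inside a critical commentary (likely fair use) or a
    wholesale copy. The filter sees only that the bits match.
    """
    for chunk in chunks:
        match = KNOWN_FINGERPRINTS.get(fingerprint(chunk))
        if match:
            return f"BLOCKED: matched '{match}'"
    return "ALLOWED"
```

    Under this logic, a home video with a matching song playing faintly in the background is blocked exactly as if it were a pirated copy–which is the scenario the next section describes.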

    Let’s say that an ISP chose to implement just such a copyright filtering system, or that a government successfully mandated the use of such systems (this idea might seem far-fetched, but it’s not: a copyright filtering amendment nearly snuck into President Obama’s economic stimulus act a few weeks ago, and major ISPs like Comcast and AT&T are already partnering with rights holders like the RIAA to enforce copyrights). What, then, would the ‘filtered’ web look like to the end user? Assuming that such a filtering system would be technically similar to Content ID, we can draw a few conclusions.

    On a filtered web, any file that makes use of a piece of copyrighted content–think remixes, mashups and commentaries–would be blocked at the source and prevented from propagating on the network. Want to shoot a video of your newborn baby to share with your friends and family via YouTube? You had better make sure that there isn’t a TV or radio on in the background–otherwise your provider might prevent you from uploading the video to YouTube in the first place. Want to stream Girl Talk’s latest album of recontextualized mashups? Too bad–your provider would kill the download before you’ve had a chance to listen to the first track. Want to watch your friend’s homemade critical assessment of a recent film, complete with brief clips from the film in question? Unless your friend has been granted explicit permission to use those clips, you can forget about watching his video essay.

    Killing the Golden Goose

    As many have noted, much of the Internet’s draw lies in its vast democratizing power, in its ability to give a voice to the previously voiceless. It is, as Harvard law professor Jonathan Zittrain puts it, a “generative” technology, which is to say, a technology that enables creation and not just passive consumption. By empowering end users and placing citizen journalists on equal footing with massive news organizations, the Internet transforms culture into a two-way street, where users can criticize, comment and remix as well as consume. Unlike radio, television and cable (which, as Free Press’ Ben Scott often points out, were also once “generative” technologies, before a series of poor policy decisions turned them into media oligopolies), the Internet invites users to actively participate, by “talking back” to the information that they are presented with.

    The reason that users have flocked to the Internet–and that the Internet has subsequently become such an important engine for economic growth–has a lot to do with this ‘generation’ of content from a diversity of sources. If we prevent Internet users from participating in the conversation–whether by gutting Net Neutrality or implementing automatic copyright filters–we’ll transform the web into yet another medium for one-way, top-down, corporate-driven communications. And in so doing, we’ll kill the golden goose: if active participation and interest in the Internet decrease, so too will advertising revenues and opportunities for commerce on the web.

    In this light, we can see the misuse of automated takedown tools like Content ID–tools that can have clear chilling effects on fair use and free speech–as an affront to both users’ rights and the economic health and stability of this nation. The creation and consumption of user-generated content is unarguably one of the web’s chief draws, and the ability to make fair use of copyrighted content is critical to the highly referential language that Internet users speak. If you discourage fair use, you discourage the creation of user-generated content. And if you discourage the creation of user-generated content, you discourage active, engaged participation in the web community, which, in turn, discourages investment in the web itself. It should be quite clear that the imprudent use of copyright filtering and automated takedown systems isn’t just a bad idea–in this economy, it’s an idea that we simply cannot afford to entertain.