The communications from Facebook and the Ad Observatory over the rest of the week varied, as did the reactions of other academics, researchers, journalists, policymakers, and civil society groups. But one thing remains clear: This incident is an example of why self-regulation will never be a sufficient mechanism for platform regulation. Letting platforms like Facebook write their own rules (and then being shocked — shocked! — when they use them to foreclose good-faith efforts to understand their impact on society) is not the solution. We should use the full array of policy and regulatory tools, including a dedicated digital regulator, to ensure transparency and accountability in the digital platforms’ behaviors, the capabilities they offer advertisers, and their impact on information and democracy. This isn’t just about academics and researchers, either: Consumers deserve to know.
Laura Edelson, a cybersecurity researcher at NYU, claimed that Facebook acted because her team had notified the company, “hours before,” that it intended to start using the tool to explore the role of disinformation on Facebook in the January 6 Capitol insurrection. As sudden as it may have seemed, though, the shutdown was the culmination of years of prickly interaction between Facebook and the Ad Observatory. It began in 2018, when the researchers built a tool to scrape data from Facebook’s Ad Library (and revealed that Donald Trump was by far the largest political advertiser on the platform). The Ad Observer browser extension now at issue was launched in September 2020 to record targeting data on political ads in anticipation of the November elections. It was quickly followed by a cease-and-desist letter from Facebook to NYU, giving the researchers 45 days to shut it down and delete all the data they had collected or face “additional enforcement action.” At that time, House Energy & Commerce Committee members wrote to Mark Zuckerberg urging him to work collaboratively with the researchers in their effort to improve transparency and accountability in political advertising. (Public Knowledge also signed a letter calling on Facebook to withdraw its cease-and-desist demand for the Ad Observer plug-in tool.)
Since last November, the two sides have been trying to come to a formal agreement. Facebook maintains the timing of its decision was unrelated to any research about the events at the Capitol.
In a blog post on the topic, Facebook claimed it shut down the NYU accounts to protect the privacy of users. It said Ad Observatory had violated Facebook’s policies by scraping data from users who had never consented to have their information collected. Further, Facebook claimed it was obligated to stop the researchers’ access because the program violated the Federal Trade Commission settlement and order after the Cambridge Analytica scandal. In a separate statement, a spokesperson said that publicly sharing targeting data would make it too easy to reverse-engineer a person’s interests and other personal information. For that reason, he said, Facebook only shows ad targeting data to users when they have personally been shown an ad.
It turns out that the users whose data was actually collected “without their consent” aren’t individual users: They’re advertisers (many of which are individuals or small businesses), whose ads are by definition already public, and whose names and profile information — other than targeting — Facebook already stores itself in its publicly accessible Ad Library. And late on Friday, Samuel Levine, the Acting Director of the Bureau of Consumer Protection at the FTC, wrote to Mark Zuckerberg to rebuke Facebook for using misleading claims about the nature of its FTC consent decree to justify shutting off the researchers. In fact, as Acting Director Levine explained in his letter, had Facebook given notice that it was invoking the consent decree, as it was committed to do, the FTC would have pointed out that the decree does not bar Facebook from creating exceptions for good-faith research in the public interest. Ultimately, Facebook acknowledged that the consent decree didn’t actually force it to suspend the researchers’ accounts. It requires Facebook to implement a “comprehensive privacy program,” and it’s that program — Facebook’s own creation — that now prohibits what the Ad Observatory team has been doing.
What this incident proves — again — is that Facebook and the other dominant digital platforms have too much power and control over the information that the public needs to understand their powerful role in our society. There are few truly independent ways to study what goes on behind the walled gardens of the digital platforms’ ad-driven business models. It’s not a straightforward hero-and-villain situation. The NYU researchers acknowledge they’re breaking Facebook’s rules (even though they’re rules Facebook gets to write) and won’t stop until Facebook provides ad targeting information on its own. At the same time, whatever its motivations (such as responding to public pressure or heading off government oversight beyond the regulations it proposes itself), Facebook has made an enormous amount of data available to researchers through its Ad Library, its Facebook Open Research and Transparency project, and projects to study the impact of the platform on the 2020 election. Facebook also has good reason to want to avoid another Cambridge Analytica-type scandal.
Still, one look at Facebook’s record of moving fast and breaking things, only to promise to do better later, makes it hard to imagine the company ever independently reporting ad targeting information without pressure from the government (how long have we been waiting for data privacy rules from Congress?) or from civil society watchdogs like the Ad Observatory.
There is already an array of policy and regulatory solutions available to Congress. For example, the Social Media DATA Act is narrowly targeted to require platforms to provide information (including targeting information) about ads to academic researchers. Congress should also pass the Honest Ads Act, which would require the same kind of transparency and accountability for political ads from platforms as from other public communication channels. Public Knowledge proposed a revision to Section 230 that would remove a platform’s liability shield from all paid advertising content. More broadly, Congress should adopt bipartisan legislation to control for the abuses of Big Tech and foster competition, enact national privacy legislation, and enable greater rulemaking by the FTC.
All of those would be good solutions to the specific dynamics of the current altercation between Facebook and the Ad Observatory, which have to do with data collection, transparency, and the impact of highly targeted political advertising on democracy. But what we also need is something that Congress is not capable of effectively providing in a market that is evolving and adapting all the time: rigorous, timely oversight. What Congress can do is put in place a dedicated digital regulator with the expertise and agility to anticipate and address complex questions like these. Such a regulator would provide clarity for platforms, protections for consumers, and support for the public interest over corporate interests — all of which is critical to addressing these abuses and fostering a functioning society that is respectful of our democratic values.