The FTC’s New Report Reaffirms Big Tech’s Personal Data Overreach – What’s New?

The report confirms the data privacy harms that consumer advocates have been sounding the alarm about for years.

The Federal Trade Commission just published a report four years in the making detailing the data privacy practices of several major technology companies, and the findings are somehow both unsurprising and disturbing. Titled “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services,” the investigation reveals how these companies collect, retain, and exploit vast amounts of personal data from users and non-users alike, often through opaque technological means and without adequate user control or protection. That data is then used to power advertising, artificial intelligence systems, and other services in ways consumers might not expect or understand. For years, consumer advocacy groups, including Public Knowledge, have sounded the alarm on these exact issues, only to see proposed legislation and regulatory action languish. Meanwhile, the tech industry and its allies have continued to argue that the sector is too dynamic and too crucial to be hampered by external oversight. Their mantra? “Trust us to regulate ourselves.” The FTC’s report shatters this illusion of self-regulation, revealing a significant gap between what tech companies claim to do and what they actually practice. This disconnect underscores the urgent need for regulation to protect consumer privacy.

The FTC’s report was prepared under the agency’s Section 6(b) authority, which allows it to conduct wide-ranging studies that need not have a specific law enforcement purpose. The report confirms what many in the public interest tech space have long suspected: perverse incentives mean tech companies are simply not suited to regulate themselves when it comes to user data privacy. Whatever is most profitable for Big Tech’s business model often takes priority over protecting user privacy. In practice, this means that in exchange for “free” use of online platforms, users surrender unfettered access to their personal data, which is then leveraged to target them with advertising. The sheer extent of data collection, and the lack of transparency around how that data is stored, shared, and deleted, is deeply troubling. Moreover, there is no common privacy standard across the industry, leaving users vulnerable to each platform’s individual practices.

On its face, the ability of online platforms to serve us ads targeted to our unique interests and behaviors may seem hyper-efficient. In reality, the algorithms behind those ads make assumptions about users that can influence their decisions, resulting in discriminatory, invasive outcomes. This practice isn’t limited to consumer advertising; it extends to discriminatory job ads that target users based on protected characteristics, such as showing preschool teacher positions primarily to women. In what the FTC report dubs “commercial surveillance,” platforms commodify user behavior both online and offline (via data brokers), feeding it into complex algorithms that draw inferences about individuals’ interests, preferences, and other characteristics.

While the companies featured in the FTC study claim not to target ads based on sensitive data like sexual orientation, race, or health status, it turns out the platforms don’t always agree on what qualifies as “sensitive” information, creating a gray area that’s ripe for misuse. And because the algorithms used for ad targeting are opaque, it is impossible for users to confirm whether platforms’ commercial surveillance practices perpetuate harmful biases or discriminatory outcomes. As the FTC report makes clear, the lack of standardization and of user control over data collection and use leaves individuals vulnerable to the often-skewed motivations of these platforms.

Meanwhile, we hear policymakers and industry representatives say some version of, “Let’s cool it with regulating the most profitable, most innovative industry in the world so it can stay profitable and innovative.” Under this theory, there are some kinks to work through, sure, but tech companies built these platforms, so tech companies are in the best position to fix them.

And to the industry’s credit, some platforms have rolled out new “trust and transparency” features in response to public pressure, improving their policies on discrimination and the use of sensitive data in targeted advertising. For example, Instagram recently announced “Teen Accounts,” with default privacy settings and content restrictions for underage users. But a closer look reveals these changes to be little more than the bare minimum: private accounts are still not the default for all users, regardless of age; Meta continues to collect and track user data without adequate disclosure; and adults still lack control over the types of advertisements they see. As Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, bluntly states in the report’s preface, “self-regulation has been a failure.”

Since this is the FTC we are talking about, we have to mention the report’s competition implications. As the report suggests, the ability to acquire and maintain access to significant troves of user data can be a path to market dominance, helping firms build “competitive moats” that lock out rivals. The competitive value of user data pushes firms to prioritize data acquisition at the expense of user privacy. And once firms become dominant, the absence of competitive pressure leaves them with little incentive to adopt more user-friendly data practices.

Even if these findings are not surprising, they matter, at the very least, because they contribute baseline facts to proposed legislation and regulatory action. The report also reaffirms the importance of the FTC’s statutory authority to investigate unfair practices and protect consumers. Although the Supreme Court recently overturned the Chevron doctrine – which had directed courts to defer to agencies’ reasonable interpretations of ambiguous statutes – the Court reaffirmed that agency findings of fact still carry significant weight in legal proceedings. This nuance amplifies the importance of the FTC’s factual record in shaping future regulation of the tech industry.

The FTC report makes it clear that we cannot reward companies for doing the bare minimum to shield Americans’ data. Comprehensive privacy regulation is sorely needed to truly protect consumers, and it should serve as a floor, not a ceiling. This must include data minimization requirements, which would mandate that companies collect, use, and retain only the personal data strictly necessary for specific, legitimate business purposes. Data minimization would require platforms to assess and justify their data collection practices; implement systems to automatically delete or anonymize data once it is no longer needed; and design services with privacy in mind from the outset. Going further, consumers must have the right to access the data a company has collected about them, as well as the ability to correct, delete, and seamlessly move that information across platforms. We need a comprehensive federal privacy law that sets these baseline protections while allowing states to build upon them. Underpinning all of this must be a private right of action, empowering individuals to seek recourse when their privacy rights are violated.

Finally, perhaps the most important takeaway from the FTC’s report is that regulatory agencies like the FTC must retain their broad rulemaking and investigative authority to effectively oversee the behavior and business practices of the many different types of digital platforms, given that industry self-regulation has only perpetuated consumer harm. To act on the FTC’s findings, we need, alongside a comprehensive federal privacy law, a digital regulator agile enough to keep pace with innovation and empowered to address consumer privacy harms.