Congress Is Still Getting Kids’ Online Safety Wrong

Restricting kids' online freedoms is unlikely to have the desired effect.

Congress says it wants to protect kids online. But once again, the bills moving forward do far more to limit young people’s autonomy and access than to meaningfully rein in the platforms that created these risks in the first place.

Two of the latest proposals, Sammy’s Law and the Kids Off Social Media Act (KOSMA), take very different approaches on paper. One builds a system for parental surveillance. The other bans kids from platforms outright. But both share the same core flaw: Instead of requiring platforms to design safer products, they shift responsibility onto families and children, often in ways that make vulnerable kids less safe, not more.

Sammy’s Law Turns “Safety” Into Surveillance

Supporters of Sammy’s Law often lead with what the bill doesn’t do. It doesn’t mandate universal age verification. It doesn’t block minors from accessing platforms entirely. But that framing misses the point.

Sammy’s Law creates a comprehensive surveillance infrastructure for minors’ online activity. Even without an explicit ban, this level of monitoring fundamentally reshapes how young people can participate online.

Constant surveillance chills speech. When kids know their messages, relationships, and activity can be watched in real time, they are far less likely to communicate freely, explore interests, or seek help.

Rather than addressing harmful product design, the bill shifts responsibility away from platforms and onto parents. It assumes that the solution to platform-driven harm is more oversight at home, not safer defaults, less manipulative engagement systems, or stronger limits on how companies amplify content.

That assumption misunderstands both technology and family dynamics. LGBTQ+ youth, children in abusive or controlling households, and others with unsupportive parents often rely on online spaces for connection, information, and support they can’t safely access offline. A law that mandates infrastructure for comprehensive monitoring risks cutting off those lifelines entirely.

The surveillance required by Sammy’s Law is also wildly disproportionate to the risks it claims to address. Messaging features are treated as justification for blanket oversight of minors’ accounts, even though the vast majority of kids’ online communication is harmless. Supporters of Sammy’s Law often argue that the bill doesn’t let parents read their kids’ messages; it only notifies them if communications include “harmful” material, like references to suicide or self-harm. But that distinction doesn’t hold up. To generate those alerts, private messages still have to be scanned and analyzed in real time. And once a parent is notified about sensitive topics, especially during a crisis, the practical result is often escalation: forced disclosure, punishment, or deeper surveillance.

Sammy’s Law strips children of privacy in the two places it matters most. At home, it empowers parents to monitor intimate conversations that kids may not be safe sharing with their families. Online, it mandates access for third-party surveillance tools that turn private communication into analyzable data.

The Kids Off Social Media Act Chooses Exclusion Over Reform

If Sammy’s Law relies on surveillance, the Kids Off Social Media Act relies on exclusion.

KOSMA would block children under 13 from accessing social media platforms entirely and ban personalized recommendation systems for all users under 17.

This is a blunt, age-based approach to risk. While age distinctions can sometimes make sense, KOSMA applies them uniformly across a sweeping definition of “social media platform,” without regard to how different services function or the actual risks they pose. A small forum, a messaging-based platform, and a video feed driven by engagement algorithms are all treated the same. Children under 13 lose access entirely. Teens aged 13 to 16 are allowed on platforms but stripped of personalized recommendations, fundamentally changing how those services work without considering whether personalization itself is harmful in every context.

The result: young people are pushed out of mainstream digital spaces that increasingly function as social, educational, and civic infrastructure, with little evidence that exclusion actually makes them safer.

Different Approaches, Same Result

Protecting children online is a real goal. But laws built around surveillance and exclusion are not a substitute for platform accountability. As Public Knowledge has previously recommended, if Congress wants to make the internet safer for kids, it needs to focus on how platforms are designed, not just on controlling who gets to use them.

Until then, these bills risk making online life smaller and more dangerous for the very children they claim to protect.