Today Public Knowledge launched a new white paper, “The Kids Aren’t Alright Online: How To Build a Safer, Better Internet for Everyone,” to weigh in on the heated debate over child safety online. While lawmakers across the country are eager to pass legislation aimed at protecting young users, many of these well-intentioned efforts are, in our view, overly restrictive and fail to address the underlying contributors to harm.
Our paper argues that responsibility should be placed on technology companies to design safer products by default, while preserving the substantial benefits platforms can offer for the learning, creativity, and social development of kids and adolescents. Among the policy recommendations in the paper’s child safety framework, one stands out as particularly relevant given recent legal and legislative momentum: our proposal for risk-based age verification.
Age Verification v. Age Assurance
Before diving into this approach, it’s important to distinguish between two related but distinct concepts that often get conflated in policy discussions. Age verification refers to definitive processes that confirm a person’s exact age through official documents like government IDs. Age assurance, on the other hand, encompasses a broader range of methods for establishing reasonable confidence about someone’s age range, including age verification as one option alongside less invasive approaches like age estimation technology, online behavioral analysis, or account history review. Our risk-based framework considers this full spectrum of age assurance tools, applying more rigorous verification only where the risks truly warrant it, rather than defaulting to the most invasive approach for every online interaction, regardless of the user’s age.
The Supreme Court Greenlights Age Verification, But That May Not Be the Best Approach in All Circumstances
This summer, in Free Speech Coalition v. Paxton, the U.S. Supreme Court overturned prior precedent on internet age-gating, upholding a Texas law requiring websites to verify users’ ages before allowing access to sexually explicit content. As Public Knowledge Legal Director John Bergmayer explained in his post, “Protecting Kids Shouldn’t Mean Weakening the First Amendment,” the legal reasoning in this decision lowers the level of constitutional scrutiny applied to content-based restrictions on adult speech. The decision may also provide an opening for other vague, privacy-invasive platform regulations passed under the pretense of “protecting the children.” In fact, a few weeks after the Free Speech Coalition decision, the Supreme Court declined to review a lower court ruling that upheld Mississippi’s law requiring social media platforms to verify users’ ages. While everyone can agree that minors’ access to pornography has no social benefit and very real harms, the same cannot be said of all social media. Here, the balance of potential benefits and potential harms becomes much more complicated.
While Public Knowledge would rather not see the government erect unnecessary barriers and friction within the Open Internet, there is clear momentum toward a federal law mandating some sort of age assurance to access online platforms like social media. Rather than restricting access to lawful, perhaps even beneficial, content through “all or nothing” age verification mandates, lawmakers could apply age assurance requirements narrowly, targeting the features and functionalities that pose a heightened risk to minors. If a federal age assurance mandate does materialize, policymakers should prioritize approaches that 1) are privacy protective and 2) minimize the burden on adults seeking to access online content.
The Supreme Court decision may have given the go-ahead to the 24 states that have passed laws requiring age verification to access obscene-for-minors content online. Add the dozen states with age verification bills moving through their legislatures, and a majority of the country is blocking access to platforms with adult content. While these laws are largely focused on restricting access to pornographic content, their effects will likely be felt more widely. As other states follow Mississippi’s example and expand the range of platforms subject to age verification, adults across the United States will have to prove they are old enough to engage in online discourse, access speech, and express themselves – in other words, old enough to use the internet at all. Age-gating risks restricting free online expression for adults unless they hand over personally identifiable information, which can be easily hacked and leaked. We just saw this happen with the Tea Dating Advice app, a platform where women share details about their dates. The app required users to submit ID to verify that they are women. 4chan users, perhaps predictably, obtained Tea’s data and leaked users’ government IDs to trolls online, exposing these women to online harassment and potential fraud.
Learning From Our Peers
There will probably be a lot to learn from our friends across the pond, who are testing out a nationwide age verification requirement. The United Kingdom’s “Online Safety Act” went into effect in July 2025; it requires social media and search services to apply age assurance before users can access content deemed harmful to children (not just pornography, but content that promotes self-harm, eating disorders, suicide, and other destructive behaviors). So far, reports suggest that implementation has been less than perfect: platforms are struggling to build age verification processes that are both effective and compliant with the law’s requirements. To be fair, as “Love Island” U.K. contestants often say, “It’s still early days.” But as of this writing, over half a million people have signed a petition calling on the U.K. Parliament to repeal the “Online Safety Act.”
The “Online Safety Act,” like most U.S. age verification bills, does not specify how online platforms must verify user ages. Many platforms offer a variety of options, ranging from biometric face scanning to uploading an ID or entering a payment card. These approaches have mixed efficacy and involve different tradeoffs. Biometric face scanning, for one, merely estimates age rather than verifying it, and has mistakenly classified adults as adolescents, cutting over-18s off from adult-appropriate content (the opposite happens, too: adolescents mistakenly deemed to look older than their actual age are allowed access to adult content).
Many users have apparently chosen not to bother with any of this madness by using virtual private networks (VPNs) to disguise their IP address and geographic location, enabling them to access websites as if they are in a different country. As the BBC reported, half of the top 10 free apps in the Apple App Store in the U.K. in July were for VPN services, with one VPN app maker claiming it saw a 1,800% increase in downloads since the “Online Safety Act” went into effect. This is not unlike the increase in search traffic for information about VPNs that occurred once Virginia’s age verification law went into effect in 2023.
There’s a reason free expression civil society groups hammer home that age verification requirements will almost certainly impede the speech of adults. The rollout of the U.K. “Online Safety Act” provides a key example: platforms under pressure to comply are sweeping up broad categories of content that could, if you squint, be deemed indecent for children, all while cutting adults off from lawful speech.
It is not simply a question of blocking adults from engaging with controversial speech. The all-or-nothing approach of age verification, combined with platform operators’ fear of prosecution, means that children will also be blocked from non-harmful, even educational and useful, speech. Users in the U.K. are documenting the parts of the internet that are unexpectedly blocked. Reddit, for one, blocked unverified users from subreddits like r/stopsmoking and r/stopdrinking – communities that exist to provide resources and support for healthier behaviors. Clips of protests were reportedly blocked on X, formerly Twitter. Spotify users find themselves unable to watch certain music videos or stream songs labeled 18+.
But is a kid perusing the subreddit r/stopsmoking meaningfully different from one participating in an anti-smoking campaign at their school? Are teenagers made better off by remaining ignorant of protest movements in their country or around the world? Exposure to controversial content is not inherently harmful, and locking down this kind of material can strip young people of both knowledge and agency. Content-gating regimes like the “Online Safety Act” assume all children are passive subjects needing constant control by their parents or the state. Public Knowledge rejects this framing and proposes a different solution – one that recognizes that the ability of children and adults alike to freely access information without government intervention is a First Amendment right worth preserving.
A Better Approach – Focus on Harmful Platform Features Rather Than Content
Age verification requirements coming to the U.S. may seem inevitable, but there are still opportunities to shape them. As we have shown, when these requirements are applied too broadly, they create significant friction for users, particularly adults, which incentivizes workarounds like VPNs. The difficulty of enforcing an age verification mandate also means regulators will spend their resources policing implementation rather than pursuing actual harms to children.
In our view, rather than mandating blanket age gates, lawmakers should adopt a risk-based standard that scales requirements according to the potential harm of specific features. Our report intentionally focuses on platform features – not content types – because defining what content is inappropriate for children is largely subjective. We believe content-based age gating would, as we’ve witnessed with the U.K.’s “Online Safety Act,” lead platforms to impose overly broad restrictions on content to avoid liability, to the detriment of both adults and the children these laws aim to protect.
In our view, the worst online harms to children stem not simply from exposure to indecent content, but from design features that facilitate harmful interactions or compulsive use, like nudge notifications, infinite scroll, and gamification. This category also includes features that connect users to unfamiliar contacts: map features where you can see other users (Snap Map and the recently launched Instagram Map), options to quickly add suggested connections – even strangers – and the ability to send direct messages to users you are not connected with. Treating features, rather than content, as the conduits of harm gets at insidious dangers that content-focused rules cannot address.
Rather than burdening all users with age checks just to view content online, lawmakers should age-gate only risky features, leaving low-risk activities – such as basic content browsing or accessing educational material – unrestricted. Medium-risk features, like posting publicly on social networks, interacting with user-generated content (e.g., likes and comments), or receiving algorithmic recommendations, could rely on age estimation using existing data. Targeted advertising has long used demographic inference to deliver campaigns to the intended audience, and large platforms are now implementing more formal, transparent age assurance tools.
For example, Google recently introduced a machine learning age estimation system that interprets account data to determine whether a user is likely over 18 – such as recognizing that a Gmail account created 15 years ago is unlikely to belong to a child – and allows users incorrectly labeled as under 18 to verify their age with a selfie or ID. Age estimation can serve as a practical compliance tool, but only if implemented with strong privacy safeguards that avoid expanded collection, retention, or sharing of personal data beyond what is strictly necessary. The appeals process must also be streamlined, transparent, and easily navigable for adults, ensuring lawful access to content and services is not unduly burdened. Poorly designed systems that impose excessive costs or technical requirements risk entrenching dominant platforms – which can more easily absorb these burdens – while disadvantaging smaller competitors and limiting consumer choice. Given that platforms benefiting financially from targeted advertising have used age estimation tools for years, requiring them to leverage these same tried-and-true methods to make their products safer for children is a reasonable regulatory burden.
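To illustrate the kind of signal such a system can draw on, here is a minimal sketch in Python of a single account-age heuristic. This is a deliberately simplified, hypothetical illustration – Google’s actual system is a machine learning model trained over many signals – and the function names and thresholds are our own, not Google’s.

```python
from datetime import date

def account_age_years(created: date, today: date) -> float:
    """Years elapsed since the account was created."""
    return (today - created).days / 365.25

def likely_over_18(created: date, today: date, min_signup_age: int = 5) -> bool:
    """Hypothetical single-signal heuristic: if even the youngest plausible
    signup age plus the account's age reaches 18, treat the user as an adult.
    Real systems combine many such signals in a trained model."""
    return account_age_years(created, today) + min_signup_age >= 18

# An email account created 15 years ago implies a user of roughly 20 or older
# under the assumption above, so no further check is needed.
print(likely_over_18(date(2010, 8, 1), date(2025, 8, 1)))  # True
# Users the heuristic wrongly flags as under 18 would fall through to an
# appeal path (e.g., selfie or ID upload), as described above.
```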
High-risk features, like “going live” on a video stream or engaging in stranger-to-stranger messaging, could require stricter age assurance, like using biometric data or a government identification. Bluesky’s U.K. model, where unverified users can still browse feeds but must verify their age to access direct messaging and adult content, is one example of striking a better balance between free expression and child safety for high-risk features.
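To make the tiering concrete, here is a minimal sketch in Python of how the framework described above might translate into a platform’s access policy. The feature names, tier assignments, and assurance labels are hypothetical illustrations drawn from the examples in this post, not a prescribed implementation.

```python
from enum import Enum

class Assurance(Enum):
    """Illustrative assurance levels, ordered from least to most invasive."""
    NONE = 0          # low risk: no age check required
    ESTIMATION = 1    # medium risk: inferred from existing account signals
    VERIFICATION = 2  # high risk: documentary or biometric proof of age

# Hypothetical feature-to-tier mapping, following the examples in this post.
FEATURE_RISK = {
    "browse_content": Assurance.NONE,
    "read_educational": Assurance.NONE,
    "post_publicly": Assurance.ESTIMATION,
    "like_and_comment": Assurance.ESTIMATION,
    "algorithmic_recommendations": Assurance.ESTIMATION,
    "go_live": Assurance.VERIFICATION,
    "message_strangers": Assurance.VERIFICATION,
}

def required_assurance(feature: str) -> Assurance:
    """Assurance level a user must satisfy before using a feature;
    unknown features default to the most protective tier."""
    return FEATURE_RISK.get(feature, Assurance.VERIFICATION)

def may_use(feature: str, user_level: Assurance) -> bool:
    """A user may use a feature if their established assurance level
    meets or exceeds what the feature requires."""
    return user_level.value >= required_assurance(feature).value

# An unverified user can still browse, mirroring Bluesky's U.K. model...
assert may_use("browse_content", Assurance.NONE)
# ...but must verify their age before going live or messaging strangers.
assert not may_use("go_live", Assurance.ESTIMATION)
```

Note the design choice in the default: a feature not yet classified falls into the most protective tier, so under this sketch new features would be safe by default rather than unrestricted by default, consistent with the safety-by-design principle our paper advocates.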
Conclusion
The risk-based age verification framework we’ve outlined here and in our white paper, “The Kids Aren’t Alright Online: How To Build a Safer, Better Internet for Everyone,” offers a smarter alternative to the blunt-force solutions that dominate policy debates on child safety online. By scaling verification requirements to actual risk, rather than treating all online experiences as equally dangerous, we can protect young users while preserving their rights to learn, explore, and connect online – without punishing adults for accessing the internet.
But age verification is just one piece of creating a truly safer internet for children. Real progress requires addressing the underlying design choices and business practices that prioritize engagement over well-being; implementing privacy-protective defaults; and integrating safety into products from the ground up. Our white paper provides lawmakers with recommendations that reflect these principles. You can also join us in Washington, D.C. for a paper presentation and discussion at our September 8 event, “The Kids Aren’t Alright Online: Building a Safer, Better Internet.”