Kids & Teens Safety Regulations for AI Chatbots Could Backfire

Lawsuits against AI developers are sparking pushes for new chatbot liability laws — but some of these proposals are likely to introduce new concerns.

Recently, the AI hype cycle has experienced a noticeable slump. Enthusiasts, once optimistic about AI ushering in a new era of progress, now share concerns about its potential to cause real-world harm. These worries are particularly acute in the context of children’s online safety, following numerous reports of AI chatbots influencing young people toward tragic outcomes, including suicide. Families are bringing wrongful death lawsuits against AI developers, alleging that their chatbot products are defective. Policymakers are aiming to address these issues through new legislation that would hold AI developers liable, and online safety advocates are calling for stronger protections for children. However, some of these efforts seem misdirected, and others may duplicate protections that current law already provides.

Common Sense Media, a children’s online safety organization, found that 70% of teens have used generative AI (a figure that has no doubt increased since the study was released last year). Anecdotal evidence corroborates this: you’d be hard-pressed to find a 16-year-old who hasn’t used ChatGPT for homework help, as a search engine, or for advice.

Among the biggest stories driving the push for AI chatbot liability are those of two teenage boys who took their lives after developing strong emotional connections with their respective chatbots, leaning on them for advice on how to navigate the thorny parts of adolescence. In Raine v. OpenAI Inc., for 16-year-old Adam Raine, that meant working through feelings of depression with ChatGPT. In Garcia v. Character Technologies Inc., for 14-year-old Sewell Setzer III, that meant creating a confidant through Character.AI.

These devastating losses have prompted families to pursue multiple legal theories. While the fact patterns for each case are notably different, both cases assert nearly identical causes of action, including:

  • Strict Product Liability – Design Defect: The design of the AI chatbot (the product) makes its use unreasonably dangerous. The product is defective in design, was defective when it left the AI developers’ control, and caused injury when used as intended. 
  • Strict Product Liability – Failure to Warn: AI developers knew that their chatbots could cause mental anguish or harm, especially to minors, but failed to disclose those risks to users or parents. The developers also did not implement any protections, including usage limits or mental health disclaimers.
  • Negligence: The harm that occurred was foreseeable, but the AI companies breached their duty of care by pushing out AI chatbot products without adequate safety evaluations and guardrails.
  • Deceptive and Unfair Trade Practices: AI companies marketed their chatbots as safe or suitable for children, while concealing material risks in order to exploit minors’ data for profit.

The product liability approach is similar to that of thousands of lawsuits against social media platforms, which point to “defective” online platform design features as the cause of addiction and harmful behavior among child users. Many of these lawsuits have been dismissed, even when they purport to target addictive design, because the complaints actually concern the content served to children rather than the design of the platform itself. Platforms are not liable for third-party content thanks to Section 230 of the Communications Decency Act (for example, in the California-based multidistrict litigation, In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, the court ruled that claims about features related to publishing third-party content fell within Section 230’s immunity). AI chatbot cases may see a different outcome, though, because the jury is still out on whether Section 230 covers AI chatbot outputs. (We think Section 230 does not shield generative AI, but others make the case for it.)

The Regulatory Response

These cases are ongoing, and there is little use in predicting their outcome. Lawmakers are hoping to get out ahead and clarify what AI chatbot safeguards and liability should look like. The resulting proposals aim to either prevent harm from happening in the first place, or to clarify pathways to accountability for harmed users and their families.

State Proposals

California’s SB 243 was signed into law by Gov. Gavin Newsom on October 13, 2025, and imposes new safety requirements on companion chatbot operators. Those requirements include suicide prevention protocols, such as implementing systems to detect and prevent suicidal ideation or self-harm and referring suicidal users to crisis services. For child users, the AI deployer must disclose that the interaction is with AI and suggest taking a break every three hours of use. Chatbots also cannot engage in sexual conduct with these users. Governor Newsom vetoed a different AI chatbot bill, which would have banned companies from making AI chatbots available to users under 18 unless the deployer could ensure the chatbot couldn’t engage in harmful conversations, including sexual content and encouragement of self-harm. The governor found the bill overly restrictive, believing it could effectively prevent kids from using AI technologies at all – which we at Public Knowledge agree could unjustly restrict kids’ free expression.

New York’s S 5668 adopts a broader approach than California’s law by establishing a comprehensive liability framework for all chatbots. It also includes enhanced protections for minors, such as requiring age verification and parental consent for using companion chatbots, and imposing strict liability if a minor self-harms after required safety measures are not followed. Although age restrictions on chatbots designed for erotic or intimate interactions likely meet the “obscene for children” standard from Free Speech Coalition v. Paxton, we oppose wide-ranging age restrictions, as they could hinder both children and adults from exploring the positive expressive potential of AI chatbots.

Federal Proposals 

At the federal level, Senators Josh Hawley (R-MO) and Dick Durbin (D-IL) introduced the Aligning Incentives for Leadership, Excellence, and Advancement in Development (AI LEAD) Act, hoping to establish a federal cause of action for product liability claims when AI systems cause harm. It’s unclear whether we actually need new laws to clarify liability avenues for harmed users, as existing product liability frameworks may already cover AI-related harms. Worse, AI LEAD could impose unnecessary constraints on positive developments in the AI world.

AI LEAD’s approach places the greatest onus on AI developers, who must prove they either adopted a reasonable alternative design or that no such alternative existed when their AI system causes harm. They’re also required to provide adequate warnings about foreseeable risks, with a notable protection for minors: risks are presumed not to be “open and obvious” to users under 18. Deployers, by contrast, face a lighter burden. They become liable only if they substantially modify the AI system in ways not authorized or anticipated by the developer, or if they intentionally misuse it contrary to its intended purpose. This allocation of liability might seem reasonable on its surface, but could create significant problems in practice, especially for open-source AI developers who have little control over which deployers may take their models and adapt them for their own use cases.  

OpenAI alone is worth $500 billion, making the occasional wrongful death or product liability lawsuit a mere nuisance rather than an existential threat. But this is not the case for teams of scrappy engineers experimenting with developing open-source AI models. In fact, AI LEAD’s liability framework creates acute problems for open-source AI, where developers release models publicly with limited control over downstream applications and often lack the resources or insurance to defend against liability claims. An open-source developer could be held strictly liable for harms arising from any of the countless deployments by users, even when those uses were unforeseeable or the developer provided reasonable warnings. This asymmetry particularly disadvantages individual developers and small teams contributing to open-source projects, who face the same legal exposure as well-resourced corporations but without the financial cushion to absorb judgments or litigation costs. By concentrating liability on the AI developer, a bill like AI LEAD could result in a less diverse AI market and disincentivize the creation of AI applications that serve the public good rather than private profit.

Shortly after AI LEAD was introduced, Sen. Hawley announced another AI chatbot bill, this time cosponsored by Sen. Richard Blumenthal (D-CT): the Guidelines for User Age-Verification and Responsible Dialogue Act of 2025 (GUARD Act). This bill is intended to impose “strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal penalties.” The proposed legislation would prohibit AI companies from providing AI companions to minors and require them to implement an age verification mechanism. Additionally, the bill mandates that chatbots regularly disclose that they are not human. Importantly, the bill’s language encompasses any AI chatbot that “produces expressive content or responses not fully predetermined by the developer,” effectively preventing anyone under 18 from accessing such chatbots. Similar to the recently vetoed California bill, this restriction would prevent minors from benefiting from the expressive and educational advantages of artificial intelligence, except in very limited situations. Consequently, rather than compelling AI companies to prioritize safety in their product design for all users, the GUARD Act essentially allows them to operate freely as long as their products are not available to children.

Company Responses

Chatbot developers and deployers are aware of the upcoming wave of regulation and are working to stay ahead by adding parental controls and adjusting design features to be safer for child users. After the death of Adam Raine, OpenAI re-tuned ChatGPT to be more restrictive to mitigate unintended harm to users experiencing mental health crises. On October 14, 2025, OpenAI CEO Sam Altman declared that the company was able to “mitigate the serious mental health issues and have new tools,” giving itself permission to “safely relax the restrictions in most cases.” Altman also announced ChatGPT will allow adults to interact with “erotica” chatbots, but prudently will put them behind an age-gate. (While Public Knowledge generally opposes age-gating content, as Free Speech Coalition v. Paxton affirmed, “obscene for children” content – a.k.a. pornography – has different First Amendment implications than general content. And an erotica chatbot would be classified as a “high-risk feature” in our risk-based age-gating framework outlined in The Kids Aren’t Alright Online report.)

Character.AI rolled out safety measures following the Garcia lawsuit, including deploying a separate, more restrictive LLM for users under 18. It also implemented parental monitoring tools and safety mechanisms to intervene when conversations involve self-harm. Just a couple of months later, though, Character.AI announced that, as of November 25, users under 18 will no longer be able to have open-ended conversations with chatbots and will be limited to two hours of chat time per day. Young users will still be able to use the Character.AI platform for creative activity, like developing stories or videos. Creating a version of a chatbot for users under 18 that removes or adjusts features to prevent harm, while still giving adults full access to the LLM, is a prudent way to balance providing expressive tools to everyone with keeping young users safe – and preferable to blocking kids from accessing AI tools altogether, as some of the proposed legislation would do.

Conclusion

The tragic deaths of teenagers like Adam Raine and Sewell Setzer III demand serious attention and meaningful action. But the rush to regulate AI chatbots raises difficult questions about whether we’re addressing root causes or simply creating new barriers that entrench existing dominant players.

If the goal is accountability, we should ensure AI deployers are accountable for the harm they have caused to any user, not just kids. But placing the responsibility on developers instead of on the deployers who alter LLMs in ways that cause harm would mean that only developers who can afford the liability could build AI, and others would likely decline to release open-source models due to the risk.