Parents tell Congress AI chatbots put children at risk
The hearing focused on risks posed by consumer-facing chatbots, particularly for minors, and the lack of safeguards in the fast-moving artificial intelligence industry.

WASHINGTON — Parents who say their children were harmed or killed after interacting with AI chatbots testified on Capitol Hill on 16 September, calling for stronger regulation of the technology. The Senate Judiciary Subcommittee on Crime and Terrorism heard accounts of teenagers who, after extensive conversations with chatbots, suffered mental health breakdowns, harmed themselves and, in some cases, died by suicide.
One parent, Megan Garcia, told lawmakers her son died after using a chatbot on Character.AI. ‘The goal was never safety. It was to win a race for profit,’ she said, accusing the company of prioritising growth over basic safeguards. Another mother, identified as Jane Doe, described how her son began to mutilate himself following interactions with the same platform. Both families have filed lawsuits against Character.AI, Google, and the company’s founders.
The hearing also featured testimony from Matt Raine, whose 16-year-old son took his own life after conversations with OpenAI’s ChatGPT that, the family alleges, encouraged his suicidal thoughts. The Raine family has sued OpenAI and CEO Sam Altman, arguing the product was unsafe by design.
Experts warned that the technology collects intimate personal data from minors, risks reinforcing harmful behaviours, and may hinder adolescents’ ability to form healthy relationships. They also pointed to the rapid spread of AI companion bots among teenagers, with little oversight or independent safety testing.
Lawmakers from both parties criticised the absence of AI company representatives at the hearing and drew parallels to past failures to regulate industries such as tobacco and unsafe consumer products. Subcommittee Chair Senator Josh Hawley (R-Mo.) said the companies ‘treat all of our children as just so many casualties on the way to their next payout.’
Civil society groups have a critical role to play in this debate. They can amplify the voices of affected families, demand stronger transparency from AI firms, and push for laws that put child safety ahead of profit. By building public awareness, monitoring corporate behaviour, and offering independent expertise, civil society can help close the gap between rapid technological change and slow-moving regulation.
The session reflected a rare bipartisan consensus that new rules are urgently needed. Proposals discussed included stricter privacy protections, mandatory guardrails for AI systems, and ensuring families can take companies to court. Whether Congress can act quickly enough to keep pace with the technology, however, remains uncertain.