Senate expands AI chatbot probe, demands Google and tech giants hand over safety records
The US Senate has widened its probe into AI chatbots after parents testified that their children were harmed by the technology. Lawmakers are demanding that Google, Meta, and other firms hand over internal safety records by 17 October, citing evidence that chatbots, used by more than 70 percent of American minors, expose children to risks such as grooming, self-harm, and sexual content.

On 18 September 2025, the US Senate Judiciary Subcommittee on Crime and Counterterrorism expanded its investigation into the risks of AI chatbots, demanding documents from Google, Meta, OpenAI, Snap Inc., and Character.AI. The move follows testimony from grieving parents who said their children were harmed — and in some cases driven to suicide — after interactions with chatbot systems. Expert witnesses at the hearing described systemic design practices that keep young users engaged by simulating friendships and validating harmful thoughts, often without meaningful safety controls.
The subcommittee noted that more than 70 percent of American minors use AI chatbots, exposing them to dangers such as encouragement of self-harm, exposure to sexual abuse material, and grooming behaviours. In a detailed annexe, the committee ordered the tech giants to produce internal research, risk assessments, records of design features that prioritise user retention, data on harmful content incidents, and usage statistics for children under 18. This includes usage patterns by time of day, data on under-13 accounts, and evidence of how often children alter their accounts to bypass age limits. The deadline for responses is 17 October 2025.
Senator Josh Hawley, who chairs the subcommittee, criticised technology companies for prioritising speed and profit over child safety, warning that Congress ‘will not look the other way’.