Political backlash mounts as Meta revises AI safety policies
Senator Josh Hawley launched an investigation into Meta’s AI practices.

Meta has announced that it will train its AI chatbot to prioritise the safety of teenage users and will no longer engage with them on sensitive topics such as self-harm, suicide, and eating disorders.
These are described as interim measures, with more robust safety policies expected in the future. The company also plans to restrict teenagers’ access to certain AI characters that could lead to inappropriate conversations, limiting them to characters focused on education and creativity.
The move follows a Reuters report revealing that Meta’s AI had engaged in sexually explicit conversations with underage users, according to TechCrunch. Meta has since revised the internal document cited in the report, saying it was inconsistent with the company’s broader policies.
The revelations have prompted significant political and legal backlash. Senator Josh Hawley has launched an official investigation into Meta’s AI practices.
At the same time, a coalition of 44 state attorneys general has written to several AI companies, including Meta, emphasising the need to protect children online.
The letter condemned what it called an apparent disregard for young people’s emotional well-being and warned that the AI’s behaviour may breach criminal laws.