Australia regulator warns AI companions pose risks to children online

Australia’s eSafety Commissioner has found that AI companion chatbots are failing to protect children from harmful content, raising concerns over weak safeguards and limited oversight.

Australia’s eSafety Commissioner has warned that AI companion chatbots are exposing children to harmful online content, following a review of several popular services.

The report, published in March 2026, assessed platforms including Character.AI, Nomi, Chai, and Chub AI, finding that many lacked effective protections against sexually explicit content and material linked to child exploitation.

According to the findings, most services relied on self-declared age verification, with no robust mechanisms to prevent underage access. The review also identified gaps in the monitoring of AI interactions, meaning harmful content could be generated or shared without detection.

The regulator raised additional concerns about the absence of safeguards related to self-harm and suicide content, noting that several platforms did not direct users to mental health or crisis support services when risks were identified.

The report also highlighted limited investment in safety measures, including a lack of dedicated moderation teams in some cases.

The findings come after the introduction of Australia’s Age-Restricted Material Codes, which require digital services, including AI systems, to prevent children’s exposure to inappropriate content and implement stronger safety controls.

The regulator indicated that non-compliance with these rules could result in enforcement action, as scrutiny of AI-powered services increases.
