Australia introduces landmark Online Safety Code
Australia has unveiled a new Online Safety Code requiring digital platforms, gaming services, and AI chatbots to curb harmful material. The rules mandate risk assessments, age checks, and reporting tools to prevent child grooming, sextortion, and the spread of violent or explicit content. Services deemed high-risk, including certain AI companion chatbots, face strict Tier 1 obligations under the Code.

Australia has introduced a new Online Safety Code aimed at digital platforms, gaming services, and AI systems, placing fresh obligations on providers of ‘relevant electronic services’ (RES). The Code, developed by industry bodies including the Australian Mobile Telecommunications Association and the Digital Industry Group Inc, sets mandatory safeguards against online pornography, self-harm material, and violent content.
The Code requires providers to conduct detailed risk assessments of their services, assigning them to one of three tiers depending on the likelihood of harmful material appearing. Services deemed ‘high risk’ fall under Tier 1, the highest category, and face the strictest compliance measures. Notably, platforms that feature AI companion chatbots must also assess the risk of those chatbots generating restricted material, such as explicit or violent content. If a chatbot’s primary purpose involves such content, it is automatically placed in Tier 1.
Among the compliance measures are new age-assurance tools for adult material, reporting mechanisms for illegal activity, and default safety settings for children. Providers of messaging apps, dating platforms, and gaming services with chat functions must introduce clear terms banning child grooming, sextortion, and the non-consensual sharing of intimate images. They are also required to give users easy-to-use reporting tools and ensure complaints are acted upon promptly. Services must maintain trust and safety teams, engage with community organisations, and report significant technological changes to the eSafety Commissioner.
The reforms represent one of the most ambitious attempts to regulate digital safety in Australia. By targeting not only social media and gaming but also AI-driven chatbots, regulators are signalling that emerging technologies will be held to the same child-protection and content-safety standards as established platforms. Industry will now face the challenge of implementing these measures without undermining user experience — a balancing act that is likely to shape the country’s online safety landscape in the coming years.