Australia expands age verification rules to AI chatbots, app stores and online platforms

Australia has introduced new age assurance requirements covering AI chatbots, app stores, gaming platforms and pornography sites as part of broader efforts to protect minors online.

Australia has expanded its age verification framework for online services, introducing new rules that require platforms hosting age-restricted content to verify the age of users. The move builds on the country’s earlier ban on social media accounts for users under 16, creating one of the most extensive age assurance regimes currently in place.

Under the Age-Restricted Material Codes, which took effect on 9 March 2026, platforms hosting content such as pornography, high-impact violence, suicide-related material or other harmful content must implement age verification measures. These may include tools such as facial age estimation, digital identity wallets, or photo identification checks.

The requirements extend beyond social media and adult websites. AI chatbots capable of generating explicit or harmful content, app stores, online gaming platforms, and certain messaging services must now ensure that users meet minimum age requirements before accessing restricted material. App stores must also prevent minors from downloading or purchasing applications rated for adults.

Search engines must also blur explicit or violent content for users who are not logged in or who are identified as under 18, while adult users will be able to view unblurred results unless they opt to keep the filter enabled.

The new regulations have prompted mixed reactions from industry. Adult content company Aylo, which operates sites including Pornhub, has chosen to block Australian users entirely rather than implement age-verification systems on its free platforms. At the same time, several major AI chatbot providers have introduced age-assurance systems or content filters to comply with the new requirements.

Australia’s eSafety Commissioner says the policy aims to shift responsibility for protecting minors onto technology companies. Authorities are also reviewing how the country’s broader Social Media Minimum Age law is working in practice as part of ongoing efforts to reduce online harms affecting young users.
