Meta and TikTok commit to Australia’s under-16 ban, but enforcement questions remain
Meta and TikTok have confirmed they will comply with Australia’s new law banning users under 16 from major social media platforms, but both companies warned that the policy will be difficult to implement and may have unintended side effects.
From 10 December, platforms including Facebook, Instagram, TikTok, and YouTube must remove users aged under 16 and take ‘reasonable steps’ to prevent them from returning. Companies that fail to comply face penalties of up to AUD 49.5 million (USD 32 million). The measure is one of the most sweeping age-based social media restrictions introduced globally.
Representatives from Meta and TikTok told an Australian Senate hearing that while they would follow the law, verifying and detecting users’ ages at scale remains a significant technical and operational challenge. TikTok’s Australia policy lead warned that a strict ban could drive younger users toward less regulated platforms, noting concerns from experts that children may migrate to ‘darker corners of the internet’ with fewer protections. Meta similarly highlighted ‘significant new engineering and age-assurance challenges’ in removing accounts at speed.
The Australian government has said companies do not need to verify every user’s identity, but must demonstrate proactive efforts to identify and disable underage accounts. That middle-ground approach has been criticised by industry voices, who argue it is both vague and difficult to operationalise. YouTube has described the legislation as well-intentioned but structurally flawed, while other services such as WhatsApp, Twitch, and Roblox may also fall under the scope of the ban.
Broader context: a global push for youth online safety
Australia’s move comes amid intensifying international debate about online safety for minors, mental health, and platform accountability. Several jurisdictions, including the EU, the UK, and US states, have explored stricter age-verification and youth-protection laws; however, enforcement remains a recurring challenge.
Australia’s model stands out for its wide applicability across platforms and its enforcement-first approach, rather than mandating age-verification technologies upfront. Critics argue this risks creating legal uncertainty without guaranteeing meaningful safety improvements. Supporters counter that voluntary industry efforts have not been sufficient to protect minors online, and say regulatory pressure is necessary to accelerate platform safeguards.
Key issue: safety vs. privacy vs. feasibility
The central policy dilemma mirrors broader global debates: how to protect young users without over-collecting personal data, forcing intrusive age checks, or pushing children toward less safe digital spaces. It also raises technical questions about what constitutes ‘reasonable steps’ for detection, and whether automated systems can distinguish minors from adults reliably and fairly.
As Australia prepares to enforce the ban, the rollout will serve as a test case for other governments weighing similar measures. Whether the approach ultimately delivers meaningful safety improvements, or simply shifts risks elsewhere, will likely shape next-generation child-protection legislation around the world.
