One month after Australia enforced the world’s strictest social media restrictions for minors, the policy is revealing cracks in enforcement and mixed results on teen behavior. The Online Safety Amendment (Social Media Minimum Age) Act 2024, which bars under-16s from platforms including TikTok, Instagram, Facebook, Snapchat, YouTube, X, Reddit, Twitch, Kick, and Threads, took effect on December 10, 2025. Platforms face fines of up to A$50 million for systemic failures to block underage users.
Meta Platforms Inc. reported blocking more than 500,000 accounts in the initial days, and nearly 550,000 by the end of the first week, according to The Guardian. By mid-January 2026, reports emerged of almost 5 million accounts purged across major sites, per The New York Times. Yet tech giants warn the measures are driving kids to unregulated alternatives.
Early Enforcement Surge
The eSafety Commissioner, Julie Inman Grant, oversees compliance, requiring platforms to take ‘reasonable steps’ such as age verification without prescribing digital ID checks, despite pre-law fears that it would. Meta’s Australian managing director, Maxine Williams, stated in a blog post that the company deployed multi-layered checks including facial age estimation and behavioral signals, as reported by CNBC. TikTok and others followed suit, but circumvention is rampant.
Teenagers are exploiting loopholes, such as using VPNs, older siblings’ devices, or apps like Yope and Lemon8, which lack similar restrictions. A Daily Mail investigation found kids scrunching faces during biometric scans to appear wrinkled, fooling AI detectors. Posts on X highlight similar tactics, with users sharing tips on bypassing checks.
Teen Workarounds Proliferate
Cyber safety experts note minimal behavioral shifts. BBC News interviewed teens who said they’ve lost interest in some platforms but migrated elsewhere, while others reported no change. One Melbourne teen told the outlet, ‘I just use my mum’s phone or a VPN—it’s not hard.’ Parental reports vary: some praise reduced screen time, others decry isolation.
The ban allows no exemption for parental consent, a point of contention. Tech firms like Meta urge reconsideration, arguing the rule pushes youth to riskier corners of the internet. In a January 12 statement, Meta noted teens flocking to ByteDance’s Lemon8 and other emerging apps, per CNBC.
Platform Compliance Strains
Enforcement challenges mount as platforms invest heavily in verification tech. Snap Inc. and ByteDance Ltd. have rolled out facial recognition and credit card checks, but privacy advocates cry foul. The Digital Freedom Project’s High Court challenge, filed shortly after the law passed, questions its constitutionality on free speech grounds, per Wikipedia entries updated through January 14, 2026.
Australian Communications Minister Michelle Rowland defended the approach, telling BBC News, ‘This is about protecting children from online harms at a critical development stage.’ Fines loom for non-compliance, though no penalties have been issued so far. Industry insiders predict a ‘cat-and-mouse’ game, with workarounds evolving faster than tech fixes.
Global Ripples Emerge
Other nations watch closely. The UK, under pressure from Labour, is considering a similar ban, with nothing ‘off the table,’ per The Guardian. European countries like France and Germany mull similar age gates. In the U.S., states like Florida eye restrictions amid mental health debates.
Australia’s eSafety site reports initial data showing millions of accounts affected, but long-term efficacy remains unclear. Posts on X from parents laud safer homes, while teens vent frustration, calling it ‘pointless.’ One viral thread detailed a 15-year-old’s multi-app migration strategy.
Data Gaps Persist
One month in, metrics are preliminary. eSafety Commissioner Inman Grant said in a January 6 update to eSafety’s official page that compliance is a ‘multi-layered process.’ Yet a CNBC analysis reveals mixed outcomes: some teens are embracing sports and reading, while others double down on evasion.
Economically, platforms face costs in the hundreds of millions for verification infrastructure. Meta’s pushback includes calls for standardized age assurance to avoid fragmented rules. Legal experts anticipate court rulings could reshape enforcement by mid-2026.
Policy Path Forward
As challenges surface, Canberra weighs adjustments. Critics like the Australian Greens argue for education over bans, while supporters such as child-advocacy groups demand stricter measures. Sentiment on X remains polarized: roughly 60% of sampled posts praise the protection while 40% decry overreach, based on trending discussions.
The ban’s true test lies in sustained impact. With nearly 5 million accounts removed, per NYT, the scale is unprecedented. Yet, if migration to unmonitored apps accelerates harms like cyberbullying, policymakers may pivot. Industry watchers await quarterly eSafety reports for harder numbers on usage drops and mental health metrics.