
Explained: Why Is Australia Banning Under-16s From Social Media? Step-by-Step Breakdown Of How The 16+ Rule Will Work

Australia’s new Online Safety Amendment Act 2024 mandates that major social media platforms block users under 16 from creating or keeping accounts starting December 10, 2025. The law targets platforms focused on user interaction, aiming to strengthen child safety online.

Published By: Ashish Kumar Singh
Last updated: December 7, 2025 17:02:32 IST


In late 2024, Australia enacted the Online Safety Amendment (Social Media Minimum Age) Act 2024, a world-first law requiring users of most large social media platforms to be at least 16 years old.

Under this law, from 10 December 2025, platforms that meet certain criteria (broadly, services whose primary purpose is enabling online social interaction) must take reasonable steps to prevent Australians under 16 from holding accounts on their services.

The prohibition is not limited to new account creation; it also extends to existing accounts already held by users under 16.

Which platforms are affected, and which are not?

Platforms covered by the ban are those whose primary purpose is to facilitate interaction between users: posting, sharing, commenting and so on.

Among the names on the list are: 

Facebook

Instagram

TikTok

Snapchat

X (formerly Twitter)

YouTube

Reddit, Twitch, Kick and Threads, among others

However, many messaging apps, games, education services and other platforms are exempt, since social interaction is not their main purpose. These include WhatsApp, Discord, Steam (and Steam Chat), gaming platforms such as Roblox, and learning platforms such as Google Classroom.

The law, therefore, does not prohibit all internet-based interaction; it targets mainstream social media.

What counts as "reasonable steps" to enforce the ban?

Importantly, the legislation does not specify exactly how platforms must verify a user's age; instead, it leaves the method to them, provided their approach qualifies as a reasonable step.

Age-assurance methods under consideration, or under trial, include:

Submitting a government-issued ID (passport, driver's licence, etc.) to a third-party verifier.

Biometric or face-scan checks (e.g. a video selfie) analysed by age-estimation technology.

Other plausible signals, such as cross-checking email addresses, device information, or previously verified profiles.

Crucially, however, the legislation does not allow platforms to compel everyone to submit government ID or to use an official government digital-ID system. Making either mandatory would breach the privacy safeguards written into the legislation.

Platforms must therefore provide age assurance in a way that respects privacy and gives users reasonable options.
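
To make the shape of that requirement concrete, here is a minimal, hypothetical Python sketch of how a platform's age-assurance gate might chain these options. Every type, field and helper in it (User, estimate_age_from_selfie, verify_age_with_third_party, and so on) is an invented assumption for illustration, not any platform's actual system; the point is only the ordering the law seems to imply: low-friction signals first, opt-in methods next, and government ID offered but never compelled.

from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: every type, field and helper below is
# invented for illustration and reflects no platform's real system.

@dataclass
class User:
    signal_age_estimate: Optional[int] = None  # e.g. from account history
    consents_to_selfie: bool = False
    selfie: Optional[bytes] = None
    offers_id_document: bool = False
    id_document: Optional[bytes] = None

def estimate_age_from_selfie(selfie: bytes) -> Optional[int]:
    """Stub standing in for a facial age-estimation service."""
    return None  # undecided in this sketch

def verify_age_with_third_party(document: bytes) -> Optional[int]:
    """Stub standing in for an external ID-verification provider."""
    return None  # undecided in this sketch

def check_age(user: User) -> tuple:
    """Try methods in rough order of privacy cost; ID is offered, never forced."""
    if user.signal_age_estimate is not None:           # 1. low-friction signals
        return user.signal_age_estimate, "signals"
    if user.consents_to_selfie and user.selfie is not None:
        age = estimate_age_from_selfie(user.selfie)    # 2. opt-in face scan
        if age is not None:
            return age, "facial-estimation"
    if user.offers_id_document and user.id_document is not None:
        age = verify_age_with_third_party(user.id_document)  # 3. optional ID
        if age is not None:
            return age, "third-party-id"
    return None, "undetermined"  # goes to review/appeal, not silent removal

def may_hold_account(user: User) -> bool:
    age, _ = check_age(user)
    return age is not None and age >= 16

print(may_hold_account(User(signal_age_estimate=17)))  # True
print(may_hold_account(User(signal_age_estimate=14)))  # False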

Most platforms are expected to offer a review or appeal channel where someone wrongly flagged as under 16 can contest the decision. They must also give under-16s (or their parents) a way to download and save their data, such as photos, contacts and messages, before the account is deactivated.
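
As a rough illustration of that data-download step, the following small Python sketch bundles the categories the law mentions (photos, contacts, messages) into a single downloadable archive. The record layout and field names are assumptions made for this example, not any platform's real export format.

import json
import zipfile
from pathlib import Path

def export_account_archive(user_record: dict, out_dir: Path) -> Path:
    """Bundle a departing user's data into one downloadable zip archive."""
    out_dir.mkdir(parents=True, exist_ok=True)
    archive = out_dir / f"{user_record['user_id']}_export.zip"
    with zipfile.ZipFile(archive, "w") as zf:
        # One JSON file per category named in the data-portability duty.
        for section in ("photos", "contacts", "messages"):
            zf.writestr(f"{section}.json",
                        json.dumps(user_record.get(section, []), indent=2))
    return archive

# Example with a minimal, made-up record:
record = {"user_id": "u123", "photos": ["beach.jpg"],
          "contacts": ["alice"], "messages": [{"to": "alice", "text": "hi"}]}
print(export_account_archive(record, Path("exports")))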

Fines for non-compliance: who pays?

If a social media company fails to take reasonable steps to keep under-16s off its service, or systematically allows under-16s to remain, it can be fined up to AUD 49.5 million (roughly US $33 million).

To be clear, under-16 users (and their parents or carers) face no penalty under the law if they are found to hold an account. The legal burden falls on the platforms, not on the children.

Why is Australia doing this? What is the aim?

According to the government and regulators, the primary aim is to protect the mental health, well-being and safety of young people. In their view, social media poses heightened risks during childhood and adolescence: addiction, cyberbullying, social pressure, exposure to harmful content, privacy problems and harm to mental health.

The reasoning is that delaying exposure until after 16 gives young people more time to develop emotional, social and technological literacy before navigating a complicated social-media landscape.

Supporters, including the eSafety Commissioner, see the law not as a permanent ban but as a delay: a temporary measure that gives children more time before they gain access to the major social media platforms.

Caution, criticism and open questions

Despite the ambition, there are real concerns about the ban's effectiveness:

Because the law does not prescribe a particular age-verification method, platforms are not bound to any one approach, which could produce inconsistent implementations or gaps.

Age-assurance technologies such as face scanning or biometric estimation can misclassify borderline teens (aged 15 to 17), whose apparent age may vary with ethnicity, lighting or image quality.

There is a concern that minors may simply shift to less regulated, more chat-based platforms, or to gaming and social-gaming environments that may be even harder to oversee.

Privacy is also an issue: to perform age verification, platforms must handle sensitive personal data responsibly. The legislation mandates safeguards, yet critics worry the data could still be misused.

Finally, for many teens social media is a channel for expression, connection, creative work and support. Critics see the ban as a restriction on young people's voices and freedom of expression.

What is still uncertain?

As of late 2025, several things remain unclear:

Age-check trial programmes have been run, but regulators have not mandated any single verification method.

Platforms decide their own approach; regulators will only judge whether it is reasonable.

It is also unclear how quickly under-16 accounts will be detected and disabled, and how many under-16 users will attempt workarounds (false birth dates, VPNs, older family members' IDs, etc.).

The law does not bar under-16s from viewing publicly available content that requires no login, so they can still be exposed to some social media.

Bottom line: An untested social experiment

Australia's new social-media minimum-age rule is bold and innovative. It aims to replace today's sign-up-and-trust model with one that puts safety and mental health first, placing responsibility on platforms and backing it with severe penalties for non-compliance.

Its success, however, will depend heavily on how realistically and responsibly platforms implement age-verification systems, how well they comply with the regulation, and how the law adapts in response to unintended consequences (privacy concerns, platform migration, etc.).

Come December 2025, the world will be watching closely; many other nations are considering similar measures. Whether this becomes a template for international standards or a cautionary tale for the digital age remains to be seen.

