Experts don't think the social media age ban will protect kids. Here's what they suggest instead.
In December 2025, Australia will begin kicking under-16s off social media sites, making it the first of the several nations considering a social media age ban to actually put one into effect.
In a media release announcing the ban’s introduction to Parliament in November last year, the Albanese government said the laws “will deliver greater protections for young Australians during critical stages of their development”.
“This legislation will go a long way to providing that support and creating a new normal in the community around what age is okay to use social media,” said then Minister for Communications Michelle Rowland.
“Keeping children safe – wherever they are – is a collective responsibility, and the Albanese Government is stepping up to play our role.”
It took only eight days from when the bill was first introduced to Parliament for both major parties to vote in favour of the proposed laws. In a rush to push several pieces of legislation through in Parliament’s last sitting week of the year, the government left the community with just one day to make submissions to the Bill’s senate inquiry.
Around 15,000 submissions were made to the inquiry in November last year despite the 24-hour window, with many expressing dissent against the proposed, now passed, laws. Many of the details surrounding the new laws remain unclear.
With just four months left until the ban comes into force, experts are questioning whether it will actually protect children from harm online.

But first, who's in and who's out?
Queensland University of Technology law lecturer Dr Lisa Archbold says a key area of concern with these laws is the uncertainty around which online platforms and services will be included in the ban.
In short, if you can post, interact with or privately communicate with other people on a platform, chances are it’ll fall under the scope of the ban. This won’t just mean major services like Facebook, Instagram, Snapchat, TikTok, Reddit and X, but any smaller platforms that could fall under this definition.
YouTube, which was originally going to be exempt from the ban, has now also joined the list of age-restricted platforms, and search engines like those operated by Google and Microsoft will require Australia-based users to verify their ages to access their services while logged into their accounts.
Private messaging apps, online gaming platforms, and other platforms or services with the “primary purpose of supporting the health and education of end-users” are exempt from the ban for now.
The laws confer personal rule-making powers on the Minister for Communications (currently Anika Wells MP) to enable flexibility in applying this wide definition to existing and emerging technologies and services, bypassing the much slower decision-making process of amending legislation through Parliament. The Minister must seek advice from the eSafety and Information commissioners (and “have regard to that advice”) before exercising this power.
“Because of the nature of this piece of legislation, it would have been good to have more certainty around that particular aspect before it was enacted,” says Dr Archbold.
Can these laws actually protect children from harm online?
Probably not, and according to experts, enforcement of the ban is going to present a challenge.
“Kids are incredibly smart,” says social worker and therapist Keri Okanik. “Everyone under the age of 30 probably knows how to use a VPN [Virtual Private Network] and get around some of those mechanisms for enforcing these kinds of policies in the first place.”
That being said, the onus is on platforms to adhere to the new laws and so while children and parents won’t face consequences for getting around the ban, it’s unlikely online platforms will take much of a hit either.
“So if you're thinking about a smaller platform, there might be enforcement issues as to whether any particular infringements get picked up or get taken forward via an enforcement route,” says Dr Archbold.
“Similarly, if it's a larger platform, there might be issues with whether the fines are a sufficient deterrent.”
Social media companies could face fines of up to almost $50 million for failing to prevent children from creating accounts on their platforms, but given the sheer profits of the major players, it would take them mere days or weeks to earn enough to cover a year's worth of fines.
Even if the ban is successfully enforced on platforms caught by the incoming laws, Dr Archbold says children’s safety may be even more at risk.
“One of the concerns I have with the ban is that it will move children and young people to these platforms that we don't have as much visibility on, or aren't as worried about compliance, so that's going to be a challenge.”
Is there a more effective way to address online safety?
What Dr Archbold does support is having a better internet for everyone.
“I think that's the starting point…The status quo is not good enough. We need to have better regulations in the digital space to ensure that the platforms our children and young people are going on are used for good and beneficial purposes.”
Proposed law reforms
A core principle of children's rights in digital spaces is ensuring they have a voice and agency in matters that affect them. While some consultation was conducted with children in relation to the ban, Dr Archbold thinks it could have been done better.
“There were certainly more opportunities where children and young people could have been involved and certainly, given the constrained time frames in which it was implemented, I don't think that there was adequate consultation with the public and experts and advocates at large that could have raised these important issues.
“I think to give this kind of policy the time it needs to be properly considered, further consultation should really have been done.”
Dr Archbold says there are some proposed reforms to Australia's privacy framework which would address children's safety better than a ban. One of these is introducing a fair and reasonable test for the collection, use and disclosure of information, with whether an action is in the best interests of the child as one of the relevant factors in that test.
“So, really, embedding children's rights principles into our current laws, I think, is a really good first step.”
Proposed reforms to the Online Safety Act, the legislation in which the social media ban sits, include a digital duty of care for social media.
“Another key area of reform I think we do need is…having that higher threshold and a fiduciary duty for social media companies and digital platforms to, again, ensure that they are having a duty of care to everyone, including children and young people,” says Dr Archbold.
Dr Archbold believes these proposed reforms would better ensure that social media companies are more equipped to make digital spaces safer for children.
“I think that they’re better than resorting to a ban which doesn't really address these sort of more fundamental issues across the board, of all the places that children and young people interact online.”
Digital and media literacy
Okanik, who works primarily with children and young adults, says blanket age restrictions don't solve the underlying problem of media and digital illiteracy. "And I think that's huge," she adds.
“The way that we educate people on how to be safe and use online spaces in an effective way doesn’t exist, so delaying access doesn’t actually prepare kids for engaging in the online world.
“I think that because we don’t have that digital literacy built into education, there are huge safety and protection concerns for young people engaging online. That is anything from accessing inappropriate content to cyberbullying to risking child exploitation, being exposed to predatory behaviour, grooming, all of those kinds of things that happen online, and young people are at serious risk for that.
"I think that sometimes having early guided exposure helps them to develop those critical skills and understanding so that their engagement within social media is healthier and more prosocial."
Dr Archbold says this is a key part of what we need to do to ensure children can protect themselves in online spaces.
“It's not just about having laws and regulations. We also need to ensure that children and young people understand how to engage in the space.”
Decentralisation and interoperability
Okanik doesn't think privately owned social media platforms have "adequate protections in those spaces to keep kids away from unsafe content online".
“The biggest risk is that when private companies own these spaces in which people interact, they have a lot of influence and control over how those spaces are moderated and facilitated."
The Electronic Frontier Foundation (EFF) says interoperability, where social media networks allow users to connect and communicate with others across different platforms using a set of shared protocols, is “a key policy for a pro-competitive Internet”.
“Interoperability undermines network effects that keep users locked into a conglomerate’s ecosystem. It removes barriers for new entrants by letting small players piggyback on the infrastructure developed by big ones.”
Since the social media ban first passed in December last year, concerns about the future of online safety and privacy have been heightened by the alignment of tech executives behind US President Donald Trump at the commencement of his second presidential term in January 2025.
Security researcher, journalist and member of Lockdown Collective Micah Lee says it’s now more important than ever for everyone to move away from internet services provided by US-based 'Big Tech' companies and “instead build out local infrastructure based on open source technologies”.
“A great thing about FOSS [free and open-source software] is that so much software already exists that can be used as alternatives to the current Big Tech platforms without needing to build them from scratch."
Decentralised social networks like the Fediverse are steadily gaining popularity, but Lee, whose collective develops an open-source app for migrating X (formerly Twitter) account data to interoperable platform Bluesky, says it’s too early to tell if they’ll ultimately win out against corporate platforms.
“The main issue, in my opinion, is usability and polish. It's just so much simpler to install the TikTok app, make an account, and start scrolling, than it is to even pick out a Mastodon server.”
Okanik also supports a shift to decentralised social networking but says part of the resistance to moving to these spaces is because many people are already locked into existing ecosystems.
"The perception is, 'I have to exist here or I don't exist', right?
"Our identity has become not only who we are as a person, but who we are as this pseudo-person online. And I think that it's really important that we separate that and don't see that as our identity and use social media in a very different way."
While the privacy harms caused by social media giants are huge and well-documented, interoperable social networking also comes with privacy and security risks which the EFF says need to be carefully considered.
“More interoperability also means companies have new ways to share and collect personal information. This is an argument that the tech monopolies have themselves presented in defense [sic] of their behavior [sic], and as part of a promise to behave better in the future.”
Even so, the EFF says the dangers of online data-sharing highlight the need for better privacy law.
“In a world where user consent and purpose minimization [sic] are properly defined and vigorously protected, most of the concerns with new interoperability programs would be moot.
“More importantly, the idea that we must rely on platforms’ good will for our protection would seem rightfully absurd.”