8 min read

Social media age verification is "dangerous", experts warn


In December 2024, the Australian federal government passed world-first legislation banning under-16s from accessing social media in a move to protect children from harm online, giving platforms 12 months to implement effective age verification systems or risk massive fines.

Now other jurisdictions around the world are looking to pass similar legislation, with some considering adopting Australia’s age verification approach.

While we don’t yet know exactly how age verification is going to play out in Australia, placing the onus on social media platforms to implement these systems has privacy experts concerned.

“There are some good arguments for restricting content by age. The dangerous part about age verification laws is how they might get implemented,” says independent security researcher, programmer, and journalist Micah Lee.

“The naive way to implement it, and the way that I imagine most jurisdictions are pushing towards, is by requiring private companies to maintain surveillance databases on millions of people.”

Many of the details surrounding the new laws remain unclear, and with just four months left until the ban comes into play, experts are questioning whether it will actually protect children from harm online.

The Australian government commissioned the UK-based Age Check Certification Scheme to run technology trials for the ban’s implementation, with 53 companies including Meta and Google participating.

The trial isn’t selecting any one company to conduct age verification for Australia-based social media users; instead, it aims to assess the “efficacy of each technology” to “inform advice to Government and eSafety on implementation of industry codes and policy development to safeguard children online”.

Some of the technologies tested in the Age Assurance Technology Trial include facial and other biometric recognition systems, ID and digital wallet verification, and even blockchain-based methods to verify, estimate or infer users’ ages.

Lee, formerly the Director of Information Security at The Intercept, says this kind of approach is dangerous for two reasons.

“Not only will it require a new massive database of private information that could get breached, but exactly which content each user is viewing is getting monitored.

“It would prevent people anonymously using the internet. Anonymity is incredibly important, particularly for marginalised people.”

On 20 June 2025, the Age Assurance Technology Trial published preliminary findings and declared age verification in Australia is possible and can be “private, robust and effective”.

Two days earlier, the Australian Broadcasting Corporation reported that facial biometric technologies tested in participating schools could guess children’s ages to within an 18-month range only around 85% of the time, with some children being misidentified as adults in their 20s and 30s.

The trial also found service providers were “over-anticipating the eventual needs of regulators”, with some building tools for regulators, law enforcement agencies or coroners to retrace steps taken by users when verifying their ages.

“The naive way to implement it, and the way that I imagine most jurisdictions are pushing towards, is by requiring private companies to maintain surveillance databases on millions of people.” – Micah Lee

The trial’s final report was due to be presented to the government at the end of July and is expected to be released to the public sometime later this year.

If you’re questioning why such a major decision about the ban’s implementation was left until after the laws were passed, you’re not alone.

Queensland University of Technology law lecturer Dr Lisa Archbold says there are two sides to this kind of principles-based approach to regulating rapidly developing technologies.

“In this particular legislation, what social media companies need to do is show they're taking reasonable steps [to verify users’ ages]. So though having that as a benchmark can be positive in some ways because it can evolve with the technology and what that might mean… I think in this case, because of the time frame, it would have been beneficial to have thought about that aspect a little bit more so that there was more certainty.”

Dr Archbold says questions around how age verification will work – what it will look like, what the recommended technologies are, what potential guidelines from the eSafety Commissioner might be, and whether everybody will have to go through it to engage with platforms – are particularly interesting.

“I think those questions will certainly, depending on the outcome of the current trial, be very relevant questions because it will fundamentally change how we can engage in those spaces.”

But Dr Archbold also says one of the real risks of approaching online child safety purely through the binary of age is that we stop thinking about how to make these spaces safe and free for everyone to engage with.

“Part of that is thinking about…content moderation, what is ‘fair and reasonable’ in terms of algorithms and targeted advertising or other kinds of content that’s on these platforms, and that doesn’t just affect children.

“One of the concerns I see with the ban is that it's taking focus off requirements for social media companies to take the hard steps to do those things, to make their platforms safer for everyone, including children.”

So does the root of the problem, then, lie in the way popular social media sites fundamentally operate? The short answer is yes, and according to experts, the crux of the issue comes down to privacy.

A hall of one-way mirrors

Invasive data extraction practices are the starting point of much of the harm caused to children online, according to the Electronic Frontier Foundation (EFF).

These include “the loss of personal privacy; predatory and exploitative ads that target children most vulnerable to their messaging; and discrimination resulting from consumer profiles based on a child’s gender, age, race, and the like”.

In a 2019 whitepaper titled “Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance”, the EFF describes commercial data extraction practices as “widespread and indiscriminate”.

“Corporations have built a hall of one-way mirrors: from the inside, you can see only apps, web pages, ads, and yourself reflected by social media. But in the shadows behind the glass, trackers quietly take notes on nearly everything you do.”
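
To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The class, site names and identifiers are hypothetical, not any real tracker’s API; it only shows how a third-party script embedded on many unrelated sites can stitch separate visits into one behavioural profile via a shared identifier such as a cookie.

```python
# Purely illustrative toy model of cross-site tracking; the class, site names
# and identifiers are hypothetical and not based on any real tracker.
from collections import defaultdict


class ToyTracker:
    """Simulates the server a tracking pixel or script reports back to."""

    def __init__(self):
        # One browsing profile per tracking identifier (e.g. a third-party cookie).
        self.profiles = defaultdict(list)

    def log_request(self, tracking_id: str, site: str, page: str) -> None:
        # The same embedded script on every publisher reports to one company,
        # so visits to unrelated sites join up into a single history.
        self.profiles[tracking_id].append((site, page))


tracker = ToyTracker()
tracker.log_request("cookie-abc123", "news.example", "/politics")
tracker.log_request("cookie-abc123", "health.example", "/anxiety-support")
tracker.log_request("cookie-abc123", "shop.example", "/teen-fashion")

# One identifier ties visits to three unrelated sites into a single profile.
print(tracker.profiles["cookie-abc123"])
```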

Meta, the US company that owns Australia’s two most popular social media services, Facebook and Instagram, is also behind one of the world’s largest online ad networks, earning 97.7% of its total revenue in 2024 through advertising.

In 2024, journalist Matilda Davies requested a copy of her Facebook account data and received almost 20,000 pages of information, “including every party invitation, holiday snap and regrettable Facebook status update, plus almost 20,000 interactions over two years with websites and apps that aren’t connected to my Meta accounts”.

“Meta knows a dizzying array of things about all of us, including some things it probably shouldn’t. My data covered just about every facet of my life,” says Davies.

“This pervasive online behavioral [sic] surveillance apparatus turns our lives into open books—every mouse click and screen swipe can be tracked and then disseminated throughout the vast ad tech ecosystem,” says the EFF.

Dr Archbold says regulators need to think about how this pervasive documentation of children’s online behaviour and growing up in a world where we expect to be recorded could affect young people’s development.

“I think we're at a point where we still don't know and so I would think that because we don't know all of the harms that can eventuate, we should be thinking about this in a precautionary way and trying to think about how we can minimise the impacts on children.”

"We are feeling creatures who think. We are not thinking creatures who feel."

Social worker and therapist Keri Okanik says young people are particularly susceptible to the effects of social media companies’ surveillance-based business models, in which platforms are geared toward promoting the most extreme content, including content that encourages harmful and self-destructive behaviour.

“Even groups that are designed and advertised to be support groups for people experiencing things like anorexia end up being spaces where young people compete to be the most anorexic, and those kinds of spaces are rampant online,” Okanik says.

“You get those ads on Instagram and Facebook and YouTube and, all of a sudden, everywhere you engage in social media online is pulling you toward these ideas because it plays on our emotions and it takes us out of our rational thinking brain.

“That is really dangerous for people, especially because if you've grown up in an environment where you're constantly online and you're comparing your life to things happening online and not experiencing life in the real world and you interpret these extremist spaces as reality and are relating to that, then that can suck you into some really dangerous behaviour in real life.”

Okanik says social media platforms also normalise and even encourage dangerous discourse, pointing to the virality of content encouraging misogyny and violence against women, popularised in recent years by influencers like Andrew Tate.

“If you think about ‘manosphere’ ideologies, right? And how, you know, young men who maybe are experiencing their first breakup ‘Google’, like, ‘how do I get over a girl who rejected me?’, and they're being fed algorithmically all of this content on why they should resent women and how women are the enemy, and that feels good, you know?

“We are feeling creatures who think. We are not thinking creatures who feel,” Okanik says, quoting neuroanatomist Jill Bolte Taylor.

“And so when we are upset and those messages feel good and they take the blame off of us or any kind of need for self-reflection away from us, we can deflect that pain into the world and we're encouraged to do so.”

Okanik says we’re fed so much of this content that these kinds of views start to seem normal, acceptable and even dominant.

“Then it's really easy to corrupt people's thinking and feed those ideologies and build real danger into the real world.”

So, what now?

Age verification is happening: come December 2025, people in Australia will be required to prove their age to access designated age-restricted social media platforms, and others around the world may soon face similar restrictions.

To minimise the privacy risks of the technologies that could be used to enforce the ban, Lee suggests a device-based age verification method.

“Rather than using central databases of everyone to verify their ages, it's better if children's devices are configured to attest their age directly,” says Lee.

“With this sort of device-based age verification, if a 14-year-old tried to access a service they weren’t allowed to, their device would block it. If an adult tried to access the same service, their device would allow it.

“There would be no need for surveillance databases, and the users could remain anonymous.”
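
As a rough sketch of what Lee is describing – field and function names below are illustrative, not any real platform’s API – the device would hold only a yes/no age attribute set during setup and answer access requests locally, so no ID document or identity ever leaves the device.

```python
# Illustrative sketch of device-based age attestation; names are hypothetical.
# A real scheme would also sign the attestation in secure hardware so services
# could trust it without ever learning who the user is.
from dataclasses import dataclass


@dataclass
class DeviceProfile:
    # Set once during device setup (for example, by a parent) and never
    # shared with any service.
    holder_is_16_or_over: bool


def request_access(device: DeviceProfile, service_is_age_restricted: bool) -> bool:
    """The device answers only one question: may this holder use an
    age-restricted service? No name, date of birth or ID leaves the device."""
    if not service_is_age_restricted:
        return True
    return device.holder_is_16_or_over


teen_device = DeviceProfile(holder_is_16_or_over=False)
adult_device = DeviceProfile(holder_is_16_or_over=True)

print(request_access(teen_device, service_is_age_restricted=True))   # False: blocked on the device
print(request_access(adult_device, service_is_age_restricted=True))  # True: allowed, anonymously
```

The key property is that the decision happens on the device itself, so no central database of identity documents is needed and the service never learns who the user is.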
