When it comes to the inherent mental health risks of using social media, alarm bells have been ringing for quite some time. From exposure to disturbing content to the anxiety and depression that can follow social comparison, doomscrolling and online bullying, there’s a pressing need to address the many issues that can render these platforms hazardous to our health.

Which is why it wasn’t surprising to hear U.S. Surgeon General Vivek Murthy weigh in with some recent advice. After all, the Surgeon General’s office has been issuing reports and slapping warning labels on other public health hazards, like smoking, since at least the 1960s. In his advisory, released last week, Murthy outlined the key risks to kids and teens that may arise from prolonged social media use and exposure to harmful content. Drawing on prior research, the report detailed how young people are at risk of depression, anxiety and suicidal ideation.

These findings correspond with a troubling (and powerful) catalyst, namely the addictive-by-design nature of social media itself. The report shone a spotlight on how these apps and services are engineered to drive user engagement through persistent notifications, infinite scroll and other mechanics that can lead to excessive use and disrupt cognitive functioning. That fact is not lost on young users of social platforms: nearly 75% of teenagers believed they were being manipulated by the tech companies behind these platforms to spend more time using their services.

Just how rampant are these issues? They track with just how widespread social media use is. The report notes that although thirteen is widely considered the minimum age required to use social platforms in the U.S., nearly 40% of children aged 8–12 are on social media.

Murthy urges swift action, offering several recommendations for policymakers, parents, educators, tech companies and users themselves. Many of these suggestions are long overdue, such as investment in further research, targeted policy from lawmakers and action from social networks… but is it enough, and how long will we have to wait before these recommendations take effect?

Could there be a more agile solution, one that better addresses the seemingly causal relationship between social media and negative mental health outcomes in young people?

Why a ban on social media won’t work

First, we need to address the elephant in the room. I’m not advocating for a ban on social media, and neither is Murthy, although that approach appears to be gathering momentum of late. U.S. policymakers are mulling legislation that would prohibit teens under 16 from using social media at all, and the state of Montana just banned TikTok outright. Sure, some of these policies are driven more by geopolitical security concerns, like the growing number of countries banning TikTok on government devices, but let’s still play this one out and see where we land.

For starters, banning anything just serves to make it more alluring, especially to rebellious teenagers. Look to history to see how successful bans on things decreed harmful to the youth of the nation have been. Underage drinking, smoking, access to explicit material, violent videogames… there is always a way to circumvent a ban, or a person ready to step in and enable someone, for the right price.

That’s without factoring in that social media is a product of technology, and technologies can be developed that defeat other technologies. Case in point: virtual private networks, the dark web, jailbroken devices, side-loaded apps, hacking, alt and ‘burner’ accounts, and myriad other ways to thwart the best-laid plans.

A ban on social media will be ineffective and difficult to implement. Credit: Daria Nepriakhina/Unsplash.

The challenge of fixing this

Let’s examine the options here. The way I see it, there are two paths that can be taken. The first is to treat social media like cigarettes or alcohol and introduce policy that bans the use of these platforms by persons under a certain age. However, this introduces several other challenges that threaten the viability of such a solution.

For example, by what means would you enforce age-gating on social platforms? If we follow the pattern of other online services that utilise identity-checking mechanisms, like government services or job recruitment platforms, one could imagine that a user would need to submit additional personal documentation to the social networks for age-verification purposes. I can only imagine the sheer outcry from the general public if they had to fork over any more sensitive info, not to mention the security risks and the threat of that information being sold, hacked or stolen at some point. You would need the equivalent of the GDPR at a global scale to afford even basic assurances that the information wouldn’t be abused by tech companies or commercial actors.

Another option would be to forgo government intervention and instead ask parents to further monitor their children’s social media usage. Basically, turn the world into a helicopter-parent haven, forcing uncomfortable conversations and straining relations between parent and child the world over in the name of protection. No, I can’t see that one working out either, especially with the ubiquity of personal devices, internet access and social platforms. Plus, how can you expect parents, who are already dealing with a looming global recession, the threat of redundancy thanks to AI, international tensions and the echoes of a pandemic, to somehow stay focused on whether their child is scrolling through TikTok right now?

Why would you even want to ban social media?

Social platforms play an important role in our daily life. Ever since the early days of services like MySpace and Friendster, social media has helped to connect people the world over. Sure, things have mutated somewhat over the years and our concerns have shifted, but in retrospect it’s easy to see that there were troubles even back then.

There was the politics, anxiety and fallout of who would be listed in your Top Friends (a public ranking of your closest contacts, viewable on your MySpace profile), arguments over which “relationship status” to put on your Facebook profile, and the concerns over privacy that were just starting to bubble up.

In many ways, and especially for the younger generations, social platforms became the de facto way to communicate. They have replaced the text message, the phone call, the mall hang. The same was true for me growing up in the late nineties and early 2000s, when you’d use a patchwork of disparate systems to communicate with your friends in place of social media. Schooltime was a given, but outside of that there was the newfangled tech of texts and mobile calls, MSN Messenger, ICQ, mIRC, blog comments and MySpace. And if you wanted to send a meme, there was email too.

When the social networks arrived, we initially thought they were just another service, but they soon became a digital hub, displacing the other technologies with every new feature update. I recall the day Facebook Chat arrived: coming home to a little chat window at the bottom of my screen and a flurry of messages, across every channel of communication, confirming its existence. Coincidentally, it was also the day MSN Messenger died.

Soon, social media was where everything happened, where we’d upload memories and download joy. There was undoubtedly an air of curation that would only grow as the years went on, but for many users it remains the place to share your adventures, moments of happiness and achievements with the world.

Once everyone became connected, these platforms also allowed us to venture out beyond our social groups and find new people, places and things to discover. It became (and still is) an awesome way to see the world from different perspectives, to learn about new cultures and experience life from angles you normally wouldn’t. For minorities, there is also the element of representation, where you can seek out and find people, groups and entire communities just like you. You may not feel seen, heard or represented in daily life, but at least there is a place where you can transcend that, which can then lead to even better real-life experiences, a sense of belonging and connection to new friends the world over.

There’s something special about that.

And during our most formative years, when we are learning not only about the world but who we are as people, having access to different perspectives is an absolute must. So too is being able to communicate with our peer groups, share knowledge and dabble in creative expression.

Beyond a ban, here’s an alternative solution

Banning social media could arguably do more harm than good, but there is still the lingering issue of its negative impact on young people. In seeking a solution, let’s revisit regulation, but with a twist. Instead of the government mandating that social platforms enforce strict age-gating, what if they were compelled to warn future users of the potential dangers of using their product?

I am drawing inspiration from real-world social experiences here. When we sign up for college, a new job or anywhere else that will see us interact with people on a large scale, we’re usually given some sort of training as part of our onboarding. Sometimes that’s just watching a video on company culture or being made aware of certain policies. But there are also those moments where we’re made to view mandatory occupational health and safety training media. You know the type: what to do if there is a fire, bend with your knees (not with your back), and so on.

We prep people on how to mix with others, and what to do if there is an environmental hazard, so why not do the same for potential new users of social platforms? How about mandating that social media companies develop non-skippable short-form video content that is required viewing at signup?

Best of all, most of this work has already been done. These companies no doubt have their own training media on the dangers of workplace harassment, bullying, racism and the like. These are subjects that show up time and again in the research, so why not educate users on the potential pitfalls of these services? If it’s important for your employees, it’s going to land for users of the service too. Teach teens the dangers of social comparison, how people’s feeds are usually not reflective of reality, and the psychology that compels people to curate their digital profiles to only show their “best life”. Warn them of the methods the platforms will use to hook them in and keep them glued to their screens. Or better yet, be even more proactive and regularly remind users of how long they have been in the app, or show similar wellbeing messages when they reach a scroll or check-in threshold.
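For the curious, the threshold idea above is simple enough to sketch in a few lines of code. This is purely illustrative, with invented names and numbers, and implies no real platform’s API: it just tracks session time and scroll count, and surfaces a single reminder once either limit is crossed.

```python
# Illustrative sketch only: a hypothetical in-app wellbeing reminder that
# fires once a session crosses a time or scroll threshold. All names and
# thresholds here are invented for illustration.

import time


class WellbeingReminder:
    def __init__(self, max_minutes=30, max_scrolls=500):
        self.max_seconds = max_minutes * 60
        self.max_scrolls = max_scrolls
        self.session_start = time.monotonic()
        self.scroll_count = 0
        self.reminded = False

    def record_scroll(self):
        """Call whenever the user scrolls past a screenful of content."""
        self.scroll_count += 1

    def check(self):
        """Return a reminder message once a threshold is crossed, else None."""
        if self.reminded:
            return None  # only nudge once per session
        elapsed = time.monotonic() - self.session_start
        if elapsed >= self.max_seconds or self.scroll_count >= self.max_scrolls:
            self.reminded = True
            minutes = int(elapsed // 60)
            return (f"You've been here about {minutes} minutes and scrolled "
                    f"{self.scroll_count} times. Time for a break?")
        return None
```

The app would call `record_scroll()` as the feed moves and `check()` periodically; whatever string comes back is shown to the user. The point is how little machinery the idea requires, not any particular implementation.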

You’d think this would all be an easy sell. After all, the Surgeon General loves those warning labels. Time to put them to work.

A strategy like this, be it a one-time educational event or repeated on an annual cadence, could be much easier to get over the line, and effective at a grassroots level. Every piece of the puzzle, except the policy to execute, is already there – the research, the content, the technology and the need to employ it. Plus, every single platform is now prioritising short-form video… the solution is right there in front of us.

Mental health is not something to be taken for granted, or trivialised, especially when it comes to young people. Yet, somewhere along the line, our approach became too reductive, and there’s a real possibility that social media will soon be treated like an illicit substance, when it is anything but. By educating young people on the potential dangers of social media, we can help them enjoy its benefits more safely: giving them access to what the government, the platforms and the researchers already know, rather than imposing blanket bans or age gates.

It’s clear from the growing body of research, bolstered by the Surgeon General’s report, that something needs to be done to protect the health and wellbeing of young people online. Time is of the essence, and compelling a dozen platforms to pre-emptively educate their users would be a much easier battle than convincing every jurisdiction and governing body around the world to impose restrictions that are ineffective, developmentally limiting, and easy to circumvent.
