With an estimated 95% of UK teenagers currently using some form of social media, should companies like Facebook, Instagram and Snapchat be doing more to keep young people safe through improved regulation?
While being online, of course, connects us more than ever, it also carries inevitable risks if young people aren't learning how to keep themselves safe.
Just last week, the BBC reported that UK MPs are urging better regulation of social media to protect children. Alongside this, England’s Children’s Commissioner, Anne Longfield, has said that children “remain an afterthought” for leading social media companies.
So, this Safer Internet Day, we’re here to explore the regulations that currently exist on kids’ most popular social media channels and why they need to be improved.
If you read the Terms of Service of Instagram, Snapchat and Facebook (the three most popular social channels among UK teens), very few rules are aimed specifically at protecting under-18s.
The only rule we could find to protect young people is the one stating that users must not be under the age of thirteen. This is determined by the date of birth given when registering an account: each of the three platforms mentioned will block the creation of an account for anyone who enters a DOB showing they're under 13. It is also possible to report any accounts you suspect belong to underage users.
If you allow a child in your care to use a platform for which they are too young, there is little the social media companies can do about it. And when we look at the regulations protecting 13-17-year-olds, who are legitimate, permitted users of these sites, there are none that set them apart from adult users.
With 99% of 13-17-year-olds found to use social media at least once a week, that's nearly 7 million teenagers, do we need further regulations put in place to protect these young people?
According to the UK Council for Child Internet Safety (UKCCIS), online threats fall into three categories: content (exposure to harmful or inappropriate material), contact (harmful interaction with others, including adults posing a risk) and conduct (harmful behaviour by or between young people themselves).
All of the big social channels are doing the basics when it comes to stifling threatening behaviour. Facebook, Instagram and Snapchat each have a code of conduct and a system through which users can report content and other users, as well as features to block unwanted contact.
However, as a recent UK case has brought into the spotlight, in which a schoolgirl took her own life after viewing distressing content on Instagram, more must be done to regulate content.
On Instagram, you can't directly search for certain distressing hashtags, but nothing stops accounts posting this content from being created, and you aren't prevented from following them once you've found them. What's more, once a person has shown an interest in certain topics, Instagram's algorithms show them more of the same. These preference-driven algorithms are common across all social media channels, meaning that if a young person views or follows inappropriate content even once, it can escalate quickly and take over their feed.
Facebook has recently pledged to do more to protect young people online, and hopefully the company will follow through on this promise and set a new precedent. As tech inevitably finds its way into the youngest hands of society, it’s everyone’s responsibility to protect each other, and social media companies must join this fight in full force.
Make the internet a safer space for your family with these top tips.