[Source: Reuters]
Australia’s government carved out an exemption for YouTube when it passed laws banning social media access for children under 16, but some mental health and extremism experts say the video-sharing website exposes them to addictive and harmful content.
Australia will block video-sharing platforms TikTok and Snapchat (SNAP.N), Meta-owned (META.O) Instagram and Facebook, and Elon Musk’s X for minors by the end of 2025, forcing them to impose strict age restrictions on access or face hefty fines. At the same time, the government will keep Alphabet-owned (GOOGL.O) YouTube open for all ages because it is a valuable educational tool and not “a core social media application”.
The initial ban was meant to include YouTube, but the government granted an exemption after hearing from company executives and children’s content creators who use the site.
The landmark legislation passed in November sets some of the world’s most stringent social media limits.
However, six extremism and mental health researchers interviewed by Reuters say the exemption undermines Australia’s main goal of protecting young users from harmful content.
Surveys show YouTube is the country’s most popular social media website among teenagers, used by 9 in 10 Australians aged 12-17.
They said YouTube hosts the same sort of dangerous content as the prohibited sites.
Helen Young, a member of the Addressing Violent Extremism and Radicalisation to Terrorism Network, echoed those concerns, saying YouTube’s “algorithm feeds really far-right material, whether it’s primarily racist or primarily sort of misogynist, anti-feminist stuff, to users that it identifies as young men and boys.”
The researchers acknowledged that all social media platforms struggle to control the flow of harmful content but questioned why the country’s most popular site was granted an exemption.
When asked about these criticisms, a YouTube spokesperson said the platform promoted content that met quality principles such as encouraging respect while limiting “repeated recommendations of content that, while innocuous in a single view, can be potentially problematic if viewed in repetition for some young viewers”.
In addition, YouTube has said in public online statements that its moderation is becoming more aggressive and that it has broadened the definition of harmful content flagged by its automated detection systems.