Australian Youth Effortlessly Bypass Social Media Age Restrictions, Report Finds
Introduction
A report released by the eSafety Commissioner, Australia’s leading online safety authority, has found that children and young people in Australia are easily circumventing social media age restrictions. The research highlights the inadequacy and weak enforcement of minimum age requirements on these platforms, allowing underage users to access them with ease.
Self-Reporting and Weak Verification
Social media platforms rely heavily on self-reported information from children and young people to enforce age restrictions. In addition to age declarations, apps such as Snapchat, TikTok, and Twitch employ voice analysis technologies to detect users under 16.
However, the report cautions that these measures are insufficient. Children can simply enter false ages or birthdates, leaving the platforms unaware of the true number of underage users. Julie Inman Grant, the eSafety Commissioner, emphasized the need for more stringent enforcement of age verification by social media providers.
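The weakness described above can be illustrated with a minimal sketch of a self-reported age gate. This is a hypothetical example, not any platform's actual code: the gate sees only whatever birthdate the user types in, so a false date clears it unchecked.

```python
from datetime import date

# Hypothetical minimum age for illustration; real platforms vary (often 13).
MINIMUM_AGE = 13

def age_from_birthdate(birthdate: date, today: date) -> int:
    """Compute age in whole years as of `today`."""
    years = today.year - birthdate.year
    # Subtract a year if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def passes_age_gate(claimed_birthdate: date, today: date) -> bool:
    """Return True if the self-reported birthdate clears the minimum age."""
    return age_from_birthdate(claimed_birthdate, today) >= MINIMUM_AGE

today = date(2024, 1, 1)
# A 10-year-old entering their real birthdate is blocked...
print(passes_age_gate(date(2014, 6, 1), today))  # False
# ...but the same child entering a false birthdate passes.
print(passes_age_gate(date(2000, 6, 1), today))  # True
```

Nothing in the check ties the claimed birthdate to the actual user, which is why the report argues self-reporting alone cannot keep underage users out.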
Lack of Deterrence and Inadequate Security
The report further highlights the lack of deterrents preventing minors from signing up for social media accounts. In addition, for young people who meet the minimum age but are not yet adults, privacy and security settings are often not set to a high level by default. This creates a vulnerability that malicious actors can exploit.
Call for Collaborative Action
Inman Grant stressed that addressing this issue requires a collective effort, involving not only social media providers but also parents, educators, policymakers, and technology developers. She emphasized the need for stronger partnerships and a shared commitment to creating safer digital spaces for children and young people.
Specific Platform Findings
The report provides detailed insights into the age verification practices of individual social media platforms:
- Snapchat: Relies on self-reporting with no additional verification mechanisms.
- TikTok: Uses a combination of self-reporting, voice analysis, and image recognition, but has been criticized for ineffective age verification.
- Instagram: Requires users to provide a birthdate and uses artificial intelligence to detect underage users, but has faced allegations of lax enforcement.
- YouTube: Collects age information but has no specific age verification process.
- Discord: Designed for users aged 13 and up, but allows users to bypass age restrictions by creating accounts with false information.
Recommendations for Improvement
The report concludes with several recommendations for improving age verification on social media platforms:
- Strengthen Verification Methods: Implement more robust age verification methods, such as identity verification through government-issued documents or facial recognition.
- Default to Strict Security Settings: Set high-level privacy and security settings for underage users by default, to protect them from inappropriate content and interactions.
- Educate Parents and Young People: Provide clear guidance to parents and young people about the risks of underage social media use and how to navigate the platforms safely.
- Support Research and Innovation: Invest in research and development of new technologies and strategies to improve age verification and protect children online.
Ongoing Evolution
The report acknowledges that the digital landscape is constantly evolving, and so are the challenges associated with age verification. It calls for ongoing monitoring and evaluation of platform practices, with regular updates to ensure that social media remains a safe and age-appropriate space for young people.