In a move to protect young people from online risks, Australia has added Twitch, the live-streaming service popular with gamers, to its social media ban for users under 16, which takes effect in December. The inclusion is intended to limit minors' exposure to harmful content and the social pressures of live interaction, in line with the government's broader digital safety agenda.
Australia's eSafety Commissioner, Julie Inman Grant, announced on Friday that Twitch would be included in the nation's pioneering social media ban, which commences on December 10. The announcement, made less than three weeks before the deadline, adds Twitch to a roster of platforms including Facebook, Instagram, TikTok, and Snapchat that must ensure under-16s cannot open accounts and must close existing ones. The late addition reflects the regulator's stated intention to adjust the ban's scope as platforms popular with young people are assessed.
Twitch was included because its primary function is facilitating online social interaction through live streaming and user chat, rather than gaming alone. Commissioner Inman Grant said Twitch enables users, including Australian children, to interact with others in relation to posted content, meeting the ban's criteria for platforms built around social engagement. The distinction matters because the law exempts services whose sole or primary purpose is gaming; Twitch's features, by contrast, revolve around real-time communication and community building. By targeting interaction-driven platforms, the regulator aims to reduce risks such as cyberbullying and exposure to inappropriate content.
Under the implementation timeline, no new Twitch accounts can be created by Australian users under 16 from December 10, and existing accounts held by minors will be deactivated by January 9, 2026. A Twitch spokesperson confirmed these steps, noting that the platform already prohibits users under 13 globally and requires parental supervision for users between 13 and the age of majority in their country. The phased approach gives companies and users time to comply without immediate disruption, and reflects the practical challenges tech firms face in verifying ages and enforcing such bans across diverse user bases.
By contrast, Pinterest was exempted from the ban because its core purpose is inspiration and idea curation through image boards, not social interaction. The exemption underscores the regulator's focus on platforms where user-to-user communication is central, as set out in the eSafety Commissioner's assessment. In sparing Pinterest, the authorities acknowledge that not all digital services pose the same level of risk, allowing a more targeted application of the ban that balances safety concerns against the benefits of creative and educational online tools.
The ban, hailed as a world first, requires companies to take "reasonable steps" to prevent under-16s from using their platforms or face penalties of up to A$49.5 million. Communications Minister Anika Wells said the law aims to give Australian children a "reprieve from the persuasive pull of platforms," addressing concerns about harmful content and mental health impacts. The legislation follows extensive debate over social media's role in youth development, with the government prioritizing child welfare over industry interests, and the fines are intended as a strong deterrent, pushing tech firms to invest in robust age-verification systems.
Political criticism has surfaced, with Shadow Communications Minister Melissa McIntosh labeling the late addition of Twitch as “sloppy, last-minute policy work.” She expressed alarm that such changes are occurring just weeks before enforcement, suggesting it could confuse parents and undermine the ban’s effectiveness. This backlash highlights the tensions in implementing sweeping regulations, with opponents arguing for more foresight and stakeholder consultation. However, supporters maintain that adaptability is necessary to keep pace with rapid technological changes and protect vulnerable users.
Enforcement may rely on age verification technologies such as government ID checks, facial or voice recognition, and age inference based on online behavior. In a proactive step, Meta announced it would begin closing under-16 accounts on its platforms, including Instagram and Facebook, from December 4, ahead of the official ban date. Early action by a major player signals industry readiness, but it also raises questions about consistency and privacy. As companies trial these methods, the effectiveness and ethical implications of digital age gates will be closely watched.
As the December 10 deadline approaches, the ban represents a significant experiment in digital governance, with potential implications for global social media regulation. However, challenges remain, including how to effectively enforce the rules and whether children will find ways to circumvent them. The eSafety Commissioner has indicated no further platforms will be added before the ban begins, focusing on ensuring compliance across the listed services. This initiative could inspire similar measures worldwide, but its success will depend on collaboration between governments, tech companies, and communities to safeguard young users in an interconnected digital era.
