Australia is rapidly developing a reputation as the world's fastest e-regulator. Not content with the country's world-leading legislation prohibiting under-16s from holding social media accounts, which takes effect on December 15, the e-Safety Commissioner has turned her sights to chatbots.
On September 9, the Commissioner registered six new industry-drafted codes under Australia's Online Safety Act that aim to "better protect children" from "a range of 'lawful but awful' age-inappropriate content," including "the clear and present danger posed by mostly unregulated AI-driven companion chatbots." The codes are legally enforceable, and noncompliance may result in civil penalties of up to $49.5 million. The registration came one day after enforcement action was taken against a UK technology company for "enabling the creation of child sexual exploitation material through the provision of its online 'nudify' services, in breach of an industry standard under the Online Safety Act."

The Commissioner was at pains to point out that the codes were developed by industry, "adopt some good key practice measures already being implemented by major platforms," and represent a "significant lift of the industry's responsibilities under Australia's Online Safety Act." The co-regulatory approach, in her words, represents a

"best practice example of how industry and the online safety regulator can work together to produce meaningful and enforceable industry codes that provide world-leading protections for children online. This shows how a co-regulatory approach can be successful and deliver meaningful safety protections. The industry associations have developed these codes and my office will be responsible for enforcing them to protect children from accidental exposure to content they are not cognitively ready to process and certainly cannot 'unsee'."
While the codes will apply to a range of online services and platforms—including app stores, gaming services, pornography websites, and AI services—enforcement action confirms that chatbots are the primary target. On October 23, the Commissioner issued legal notices to four popular AI companion providers—Character Technologies, Inc. (character.ai), Glimpse.AI (Nomi), Chai Research Corp (Chai), and Chub AI Inc. (Chub.ai)—requiring them to "explain how they are protecting children from exposure to a range of harms, including sexually explicit conversations and images and suicidal ideation and self-harm." Failure to comply could result in daily fines of up to $825,000.
The focus on chatbots is substantiated, according to the Commissioner, by

"anecdotal reports of children – some as young as 10 years of age – spending up to 5 hours per day conversing, at times sexually, with AI companions…We've also seen recent reports of where AI chatbots have allegedly encouraged suicidal ideation and self-harm in conversations with kids with tragic consequences."
This likely refers to US litigation brought by parents against character.ai and OpenAI in relation to suicides allegedly abetted by chatbot interaction.
While there is little doubt that chatbot applications are proliferating and their use is increasing, evidence of their use and associated harms in Australia is limited. Although the Education Minister claimed that "AI chatbots are now bullying kids. It's not kids bullying kids, it's AI bullying kids, humiliating them, hurting them, telling them they're losers … telling them to kill themselves," a government-commissioned anti-bullying review by a clinical psychologist and a suicide prevention specialist contained no reference whatsoever to chatbots. Yet the Minister's comments were made at the very press conference launching that review.
This raises the question of whether Australia's regulatory activism is motivated by actual evidence or by speculative fears of harm fueled by anecdotes and a small number of highly publicized, exceptional cases. Research shows that children, and some adults, may treat bots as quasi-human companions, leaving them emotionally vulnerable, and it raises concerns about AI's tendency to agree with whoever is "chatting" with it, regardless of inaccuracy, inappropriateness, or absurdity. But whether an AI companion is more or less likely than a human companion to engage in such behavior remains an open question, as does the extent to which the behavior and intentions of the human interacting with the bot contribute to undesired outcomes.
There is little doubt that Australia's strong regulatory response draws on the political reward of being seen to support Australian parents amid the uncertainties surrounding artificial intelligence. The e-Safety Commissioner is not an independent regulator charged with impartially governing an industry but rather a public servant tasked with overseeing the high-profile implementation of Australia's self-proclaimed world-leading legislation constraining social media.
Nevertheless, credit must be given for some of the lower-profile advisory work of the e-Safety Commissioner's office, such as an Online Safety Advisory on chatbots and young people released in February. There, parents are positioned as capable actors and given practical advice to help them encourage young people to be curious yet cautious consumers of AI. The pity is that this advice is being crowded out by regulatory brouhaha.