Australian regulators are intensifying scrutiny of artificial intelligence platforms following a sharp rise in complaints tied to non-consensual and gendered AI-generated imagery. The concerns were raised by eSafety Commissioner Julie Inman Grant, who said her office has seen a significant increase in reports since late 2025 involving non-consensual images produced by advanced AI models. Some of the complaints, according to regulators, involve highly sensitive content that may breach existing online safety laws. The development highlights growing tension between rapid AI deployment and regulatory frameworks designed to protect individuals from digital harm, particularly as generative tools become more accessible and sophisticated.
The concerns center on Grok, the AI model developed by Elon Musk's xAI. Australian authorities have pointed to features such as the so-called "Spicy Mode," which critics argue lowers safeguards and enables misuse through deepfake generation. Regulators say complaints related to Grok-generated imagery have doubled in recent months, raising alarms about whether existing content moderation controls are sufficient. Inman Grant emphasized that under Australian law, digital platforms are required to take reasonable steps to prevent the creation and spread of harmful AI-generated content, regardless of whether it is produced intentionally or through automated systems.
The issue is also gaining traction internationally as governments reassess how AI models intersect with online safety and data protection rules. European regulators have already deemed certain AI-generated imagery features unlawful under regional standards, setting a precedent that could influence enforcement actions elsewhere. Australian officials have signaled that they are reviewing similar measures, including potential investigations and legal action if platforms fail to meet compliance obligations. The debate reflects a broader regulatory shift toward holding AI developers and distributors accountable for downstream misuse rather than treating generative models as neutral tools.
For markets and the crypto-linked AI sector, the regulatory push adds another layer of uncertainty. AI tokens and decentralized platforms tied to generative models have attracted investor interest, but rising scrutiny could alter development and deployment timelines. Analysts note that clearer rules may ultimately benefit the sector by setting boundaries that reduce reputational and legal risk. In the near term, however, heightened oversight signals that AI-driven platforms operating across borders may face increasing compliance costs and enforcement pressure as regulators move to close gaps exposed by rapid technological adoption.



