Elon Musk finally responded last week to widespread outrage about his social media platform X letting users create sexualised deepfakes with Grok, the platform’s artificial intelligence (AI) chatbot.
Musk has now assured the United Kingdom government he will block Grok from making deepfakes in order to comply with the law. But the change will likely only apply to users in the UK.
These latest complaints were hardly new, however. Last year, Grok users were able to “undress” posted pictures to produce images of women in underwear, swimwear or sexually suggestive positions. X’s “spicy” option let them create topless images without any detailed prompting at all.
And such cases may be signs of things to come if governments aren’t more assertive about regulating AI.
Despite public outcry and growing scrutiny from regulatory bodies, X initially made little effort to address the issue and simply limited access to Grok on X to paying subscribers.
Various governments took action, with the UK announcing plans to legislate against deepfake tools, joining Denmark and Australia in seeking to criminalise such sexual material. UK regulator Ofcom launched an investigation into X, seemingly prompting Musk’s about-turn.
So far, the New Zealand government has been silent on the issue, even though domestic law is doing a poor job of preventing or criminalising non-consensual sexualised deepfakes.
Holding platforms accountable
The Harmful Digital Communications Act 2015 does offer some pathways to justice, but is far from perfect. Victims are required to show they’ve suffered “serious emotional distress”, which shifts focus to their response rather than the inherent wrong of non-consensual sexualisation.
Where images are entirely synthetic rather than “real” (generated without a reference photo, for example), legal protection becomes even less certain.
A members’ bill is expected to be introduced later this year that would criminalise the creation, possession and distribution of sexualised deepfakes without consent.
This reform is both necessary and welcome. But it only tackles part of the problem.
Criminalisation holds individuals accountable after harm has already occurred. It does not hold companies accountable for designing and deploying the AI tools that produce these images in the first place.
We expect social media providers to take down child sexual abuse material, so why not deepfakes of women? While users are responsible for their actions, platforms such as X provide an ease of access that removes the technical barrier to deepfake creation.
The Grok case has been in the news for many months, so the resulting harm is easily foreseeable. Treating such incidents as isolated misuse distracts from the platform’s responsibility.
Light-touch regulation is not working
Social media companies (including X) have signed the voluntary Aotearoa New Zealand Code of Practice for Online Safety and Harms, but this is already out of date.
The code does not set standards for generative AI, nor does it require risk assessments prior to implementing an AI tool, or set meaningful consequences for failing to prevent predictable forms of abuse.
This means X can get away with allowing Grok to produce deepfakes while still technically complying with the code.
Victims could also hold X responsible by complaining to the Privacy Commissioner under the Privacy Act 2020.
The commissioner’s guidance on AI suggests that both the use of someone’s image as a prompt and the generated deepfake could count as personal information.
However, these investigations can take years, and any compensation is usually small. Responsibility is often split among the user, the platform and the AI developer. This does little to make platforms or AI tools such as Grok safer in the first place.
New Zealand’s approach reflects a broader political preference for light-touch AI regulation that assumes technological development will be accompanied by adequate self-restraint and good-faith governance.
Clearly, this isn’t working. Competitive pressures to release new features quickly prioritise novelty and engagement over safety, with gendered harm often treated as an acceptable byproduct.
A sign of things to come
Technologies are shaped by the social conditions in which they are developed and deployed. Generative AI systems trained on masses of human data inevitably absorb misogynistic norms.
Integrating these systems into platforms without robust safeguards allows sexualised deepfakes that reinforce existing patterns of gender-based violence.
These harms extend beyond individual humiliation. The knowledge that a convincing sexualised image can be generated at any time – by anyone – creates an ongoing threat that alters how women engage online.
For politicians and other public figures, that threat can deter participation in public debate altogether. The cumulative effect is a narrowing of digital public space.
Criminalising deepfakes alone won’t fix this. New Zealand deserves a regulatory framework that recognises AI-enabled, gendered harm as foreseeable and systemic.
That means imposing clear obligations on companies that deploy these AI tools, including duties to assess risk, implement effective guardrails, and prevent predictable misuse before it occurs.
Grok offers an early signal of the challenges ahead. As AI becomes embedded across digital platforms, the gap between technological capabilities and legislation will continue to widen unless those in power take action.
At the same time, Elon Musk’s response to legislative action in the UK demonstrates how effective political will and robust regulation can be.
The authors acknowledge the contribution of Chris McGavin to the preparation of this article.
The post “Sexualised deepfakes on X are a sign of things to come. NZ law is already way behind” by Cassandra Mudgway, Senior Lecturer in Law, University of Canterbury was published on 01/20/2026 by theconversation.com