X Is Promoting Nonconsensual AI-Generated Sexual Images: The Law and Society Must Evolve

X (formerly Twitter) has become a site for the rapid spread of nonconsensual sexual images generated by artificial intelligence (AI), also known as “deepfakes”.

Using the platform’s own built-in generative AI chatbot, Grok, users can edit images they upload through simple voice or text prompts.

Various media outlets have reported that users are turning to Grok to create sexualised images of identifiable individuals, primarily women but also children. These images are openly visible to users on X.

Users are modifying existing photos to depict individuals as unclothed or in degrading sexual scenarios, often in direct response to their posts on the platform.

Reports say the platform is currently generating one nonconsensual sexualised deepfake image a minute. These images are being shared in an attempt to harass, demean or silence individuals.

A former partner of X owner Elon Musk, Ashley St Clair, said she felt “horrified and violated” after Grok was used to create fake sexualised images of her, including of when she was a child.

Here’s where the law stands on the creation and sharing of these images – and what needs to be done.

Image-based abuse and the law

Creating or sharing nonconsensual, AI-generated sexualised images is a form of image-based sexual abuse.

In Australia, sharing (or threatening to share) nonconsensual sexualised images of adults, including AI-generated images, is a criminal offence under federal law and most state and territory laws.

But outside of Victoria and New South Wales, it is not a criminal offence to create AI-generated, nonconsensual sexual images of adults or to use the tools to do so.

It is a criminal offence to create, share, access, possess or solicit sexual images of children and adolescents. This includes fictional, cartoon or AI-generated images.

The Australian government has plans underway to ban “nudify” apps, with the United Kingdom following suit. However, Grok is a general-purpose tool rather than a purpose-built nudification app. This places it outside the scope of current proposals targeting tools designed primarily for sexualisation.

Holding platforms accountable

Tech companies should be made responsible for detecting, preventing and responding to image-based sexual abuse on their platforms.

They can ensure safer spaces by implementing effective safeguards to prevent the creation and circulation of abusive content, responding promptly to reports of abuse, and removing harmful content quickly when made aware of it.

X’s acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner” as well as “the sexualization or exploitation of children”. The platform’s adult content policy stipulates content must be “consensually produced and distributed”.

X has said it will suspend users who create nonconsensual AI-generated sexual images. But post-hoc enforcement alone is not sufficient.

Platforms should prioritise safety-by-design approaches. This would include disabling system features that enable the creation of these images, rather than relying primarily on sanctions after harm has occurred.

In Australia, platforms can face takedown notices for image-based abuse and child sexual abuse material, as well as hefty civil penalties for failure to remove the content within specified timeframes. However, it may be difficult to get platforms to comply.

What next?

Multiple countries have called on X to act, including by implementing mandatory safeguards and strengthening platform accountability. Australia’s eSafety Commissioner, Julie Inman Grant, is seeking to shut down this feature.

In Australia, AI chatbots and companions have been flagged for further regulation. They are included in the impending industry codes designed to protect users and regulate the tech industry.

Individuals who intentionally create nonconsensual sexual deepfakes play a direct role in causing harm, and should be held accountable too.

Several jurisdictions in Australia and internationally are moving in this direction, criminalising not only the distribution but also the creation of these images. This recognises that harm can occur even in the absence of widespread dissemination.

Individual-level criminalisation must be accompanied by proportionate enforcement, clear intent thresholds and safeguards against overreach, particularly in cases involving minors or lack of malicious intent.

Effective responses require a dual approach. There must be deterrence and accountability for deliberate creators of nonconsensual sexual AI-generated images. There must also be platform-level prevention that limits opportunities for abuse before harm occurs.

Some X users are suggesting individuals should not upload images of themselves to X. This amounts to victim blaming and mirrors harmful rape culture narratives. Anyone should be able to upload their content without being at risk of having their images doctored to create pornographic material.

It is hugely concerning how rapidly this behaviour has become widespread and normalised.

Such actions indicate a sense of entitlement, disrespect and a lack of regard for women and their bodies. The technology is being used to further humiliate certain populations, for example by sexualising images of Muslim women wearing the hijab, headscarves or tudungs.

The widespread nature of the Grok sexualised deepfakes incident also shows a broad lack of empathy, a poor understanding of consent, and a disregard for it. Prevention work is also needed.

If you or someone you know has been impacted

If you have been impacted by nonconsensual images, there are services you can contact and resources available.

The Australian eSafety Commissioner currently provides advice on Grok and how to report harm. X also provides advice on how to report to X and how to remove your data.

If this article has raised issues for you, you can call 1800RESPECT on 1800 737 732 or visit the eSafety Commissioner’s website for helpful online safety resources.

You can also contact Lifeline crisis support on 13 11 14 or text 0477 13 11 14, the Suicide Call Back Service on 1300 659 467, or Kids Helpline on 1800 55 1800 (for young people aged 5–25). If you or someone you know is in immediate danger, call the police on 000.

The post “X is facilitating nonconsensual sexual AI-generated images. The law – and society – must catch up” by Giselle Woodley, Lecturer and Research Fellow in Communications, Edith Cowan University was published on 01/07/2026 by theconversation.com