The Australian government has announced plans to ban “nudify” tools and hold tech platforms accountable for failing to prevent users from accessing them.
This is part of the government’s overall strategy to move towards a “digital duty of care” approach to online safety. This approach places legal responsibility on tech companies to take proactive steps to identify and prevent online harms on their platforms and services.
So how will the nudify ban happen in practice? And will it be effective?
How are nudify tools being used?
Nudify or “undress” tools are available on app stores and websites. They use artificial intelligence (AI) methods to create realistic but fake sexually explicit images of people.
Users upload a clothed, everyday photo. The tool analyses the image and digitally "removes" the person's clothing, typically by placing their face onto a nude body (or what the AI "thinks" the person would look like naked).
The problem is that nudify tools are easy to use and access. The images they create can also look highly realistic and can cause significant harms, including bullying, harassment, distress, anxiety, reputational damage and self-harm.
These apps – and other AI tools used to generate image-based abuse material – are an increasing problem.
In June this year, Australia’s eSafety Commissioner revealed that reports of deepfakes and other digitally altered images of people under 18 have more than doubled in the past 18 months.
In the first half of 2024, 16 nudify websites named in a lawsuit filed by San Francisco City Attorney David Chiu were visited more than 200 million times.
In a July 2025 study, 85 nudify websites attracted a combined average of 18.5 million visitors over the preceding six months. Some 18 of the websites – which rely on tech services such as Google's sign-on system, or Amazon and Cloudflare's hosting or content delivery services – made between US$2.6 million and US$18.4 million over that period.
Aren’t nudify tools already illegal?
For adults, sharing (or threatening to share) non-consensual deepfake sexualised images is a criminal offence under federal law and in most Australian states and territories. But outside Victoria and New South Wales, merely creating digitally generated intimate images of adults is not currently a criminal offence.
For children and adolescents under 18, the situation is slightly different. It’s a criminal offence not only to share child sexual abuse material (including fictional, cartoon or fake images generated using AI), but also to create, access, possess and solicit this material.
Developing, hosting and promoting the use of these tools for creating either adult or child content is not currently illegal in Australia.
Last month, independent federal MP Kate Chaney introduced a bill that would make it a criminal offence to download, access, supply or offer access to nudify apps and other tools whose sole or dominant purpose is the creation of child sexual abuse material.
The government has not taken up this bill. Instead, it wants to place the onus on technology companies.
How will the nudify ban actually work?
Minister for Communications Anika Wells said the government will work closely with industry to figure out the best way to proactively restrict access to nudify tools.
At this point, it’s unclear what the time frames are or how the ban will work in practice. It might involve the government “geoblocking” access to nudify sites, or directing the platforms to remove access (including advertising links) to the tools.
It might also involve transparency reporting from platforms on what they’re doing to address the problem, including risk assessments for illegal and harmful activity.
But government bans and industry collaboration won’t completely solve the problem.
Users can get around geographic restrictions with VPNs or proxy servers. The tools can also be used “off the radar” via file-sharing platforms, private forums or messaging apps that already host nudify chatbots.
Open-source AI models can also be fine-tuned to create new nudify tools.
What are tech companies already doing?
Some tech companies have already taken action against nudify tools.
Discord and Apple have removed nudify apps and developer accounts associated with nudify apps and websites.
Meta also bans adult content, including AI-generated nudes. However, the company came under fire for inadvertently promoting nudify apps through advertisements, even though those ads violate its standards. Meta recently filed a lawsuit against Hong Kong-based nudify company CrushAI, after CrushAI ran more than 87,000 ads across Meta's platforms in violation of Meta's rules on non-consensual intimate imagery.
Tech companies can do much more to mitigate harms from nudify and other deepfake tools. For example, they can ensure guardrails are in place for deepfake generators, remove content more quickly, and ban or suspend user accounts.
They can restrict search results and block keywords such as “undress” or “nudify”, issue “nudges” or warnings to people using related search terms, and use watermarking and provenance indicators to identify the origins of images.
They can also work collaboratively together to share signals of suspicious activity (for example, advertising attempts) and share digital hashes (a unique code like a fingerprint) of known image-based abuse or child sexual abuse content with other platforms to prevent recirculation.
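In practice, hash-sharing works by each platform computing a compact digital fingerprint of a reported image and checking new uploads against a shared list of those fingerprints. The sketch below is a simplified illustration using an exact-match SHA-256 hash; real systems typically rely on perceptual hashing (such as Meta's open-source PDQ or Microsoft's PhotoDNA), which can still match an image after minor edits. All names and values in the snippet are illustrative, not drawn from any platform's actual system.

    import hashlib

    # Illustrative only: a shared set of fingerprints of known abuse images,
    # as might be exchanged between platforms.
    known_abuse_hashes = {
        hashlib.sha256(b"previously reported image bytes").hexdigest(),
    }

    def fingerprint(image_bytes: bytes) -> str:
        # Return a SHA-256 "fingerprint" of an uploaded image's raw bytes.
        return hashlib.sha256(image_bytes).hexdigest()

    def should_block(image_bytes: bytes) -> bool:
        # Flag a re-upload whose fingerprint matches a known abuse image.
        return fingerprint(image_bytes) in known_abuse_hashes

    # The same bytes re-uploaded on another platform match the shared hash.
    print(should_block(b"previously reported image bytes"))  # True
    print(should_block(b"some unrelated image"))             # False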
Education is also key
Placing the onus on tech companies and holding them accountable for reducing the harms from nudify tools is important. But it won't stop the problem on its own.
Education must also be a key focus. Young people need comprehensive education on how to critically examine and discuss digital information and content, including digital data privacy, digital rights and respectful digital relationships.
Digital literacy and respectful relationships education shouldn’t be based on shame and fear-based messaging but rather on affirmative consent. That means giving young people the skills to recognise and negotiate consent to receive, request and share intimate images, including deepfake images.
We need effective bystander interventions. This means teaching bystanders how to effectively and safely challenge harmful behaviours and how to support victim-survivors of deepfake abuse.
We also need well-resourced online and offline support systems so victim-survivors, perpetrators, bystanders and support persons can get the help they need.
If this article has raised issues for you, call 1800RESPECT on 1800 737 732 or visit the eSafety Commissioner’s website for helpful online safety resources. You can also contact Lifeline crisis support on 13 11 14 or text 0477 13 11 14, Suicide Call Back Services on 1300 659 467, or Kids Helpline on 1800 55 1800 (for young people aged 5-25). If you or someone you know is in immediate danger, call the police on 000.

The post “Australia set to ban ‘nudify’ apps. How will it work?” by Nicola Henry, Professor, Australian Research Council Future Fellow, & Deputy Director, Social Equity Research Centre, RMIT University was published on 09/03/2025 by theconversation.com