

Users of major AI chatbots from Google and OpenAI have been able to use the tools to generate revealing “bikini deepfakes” from photos of fully clothed women, according to a report by WIRED.

The images were often created and shared without the consent of the subjects, prompting alarm among digital safety advocates. 

One documented case is from a now-deleted Reddit thread titled “gemini nsfw image generation is so easy.”

It had become a hub for users exchanging prompts and techniques to get Google’s Gemini model to adjust clothing in source photos, sometimes inserting bikinis. 


WIRED verified that one user posted a photo of a woman in an Indian sari and asked others to “remove” her clothes and “put a bikini on instead.” 

Another Reddit user responded with an AI-generated deepfake.

After being alerted by WIRED, Reddit’s safety team removed the request and the AI-generated image.  

A spokesperson stressed that “Reddit’s sitewide rules prohibit nonconsensual intimate media, including the behavior in question.”

They added that the subreddit where the discussion occurred had been banned under site rules.

The issue reflects broader challenges with generative AI. 

Most mainstream chatbots such as Google’s Gemini and OpenAI’s ChatGPT have built-in guardrails meant to block harmful outputs.

Despite this, WIRED’s limited tests found basic English prompts could still transform images of fully clothed women into bikini deepfakes. 

When asked about the misuse, a Google spokesperson said the company has “clear policies that prohibit the use of [its] AI tools to generate sexually explicit content.”

Google asserted that its tools are continually updated, “reflecting what’s laid out in its AI policies.”

An OpenAI spokesperson told WIRED that the company had loosened some guardrails earlier this year around adult bodies in nonsexual situations.

The company stressed that its usage policy prohibits altering another person’s likeness without consent and that it “takes action against users generating explicit deepfakes, including account bans.”

Corynne McSherry, legal director at the Electronic Frontier Foundation, said the trend highlights deeper risks. 

She called “abusively sexualised images” one of the core threats posed by generative image tools and urged attention to how such systems are used and how harms should be addressed.