Elon Musk warns against using Grok to create illegal content

Billionaire businessman Elon Musk has rejected allegations that his AI chatbot Grok has generated illegal images of minors, insisting that there is “literally zero” evidence to support the claims.


Musk acknowledged that adversarial prompting or “hacking” attempts could sometimes produce unexpected outputs, but said such cases are treated as technical issues that are fixed immediately.

In a statement, Musk said he was not aware of any naked underage images generated by Grok, adding that the platform does not create images unless prompted by a user and is designed to block unlawful material.

“Obviously, Grok does not spontaneously generate images; it does so only according to user requests. When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,” he said on X.

Grok, developed by Musk’s company xAI, was launched in 2023 and integrated into the X social media platform in 2024.

The chatbot can generate text and images, similar to competing systems from major AI developers.

The allegations emerged amid a broader global debate over AI safety, digital child protection, and regulatory frameworks governing generative AI systems.

On January 4, Musk warned that anyone who uses Grok, the artificial intelligence chatbot developed by his company xAI, to generate illegal content will face the same legal consequences as if they had uploaded or shared such material themselves.

Musk said the use of AI tools does not exempt users from legal responsibility, stressing that accountability rests with the individual who prompts and disseminates the content.

“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” Musk said.

The remarks come amid heightened global scrutiny of generative AI platforms and concerns over their potential misuse, including the creation of harmful or unlawful material.

Grok's official account said the company was scrambling to fix flaws in the AI tool after users claimed it had turned pictures of children or women into erotic images.

“We've identified lapses in safeguards and are urgently fixing them. CSAM (Child Sexual Abuse Material) is illegal and prohibited,” Grok said in a post on X.

Complaints of abuse began emerging on X after Grok rolled out an “edit image” feature in late December.

The tool allows users to modify images posted on the platform, prompting concerns after some users allegedly used it to partially or fully remove clothing from images of women or children.

In a statement, X Safety said the platform takes action against illegal content, including Child Sexual Abuse Material (CSAM), through removal of content, permanent suspension of accounts, and cooperation with local governments and law enforcement where necessary.

X Safety emphasised that users who prompt or use Grok to generate illegal material are treated no differently from those who directly upload such content to the platform.