Two minors and one adult – who was underage when the incidents occurred – have filed a lawsuit against xAI and Elon Musk personally. The lawsuit, filed on Monday and first reported by The Washington Post, claims that the victims' own photographs were used as a basis to generate sexualized content depicting them as children.

"Spicy mode" in the Crosshairs

At the core of the lawsuit is Grok's so-called "spicy mode," a feature xAI launched last year with fewer content restrictions than the standard mode. According to the plaintiffs, xAI and the company's top management knew that the feature could be misused to produce AI-generated child sexual abuse material, but chose to proceed with the launch without implementing industry-standard prevention measures.

The lawsuit was filed in the Northern District of California and seeks class-action status on behalf of all who may have been similarly affected, according to source material reviewed by 24AI.

The plaintiffs allege that xAI deliberately designed and profited from a tool that lacked basic safeguards against child abuse material.
Three Teenagers Sue Elon Musk's xAI Over AI-Generated Abuse Material

Legal Landscape Under Pressure

The case comes at a time when legal liability for AI-generated illegal content is rapidly evolving globally. According to legal research reviewed by 24AI, two questions in particular will be central to such cases going forward.

First, it is unclear to what extent Section 230 – the US law that traditionally shields platforms from liability for user-generated content – applies when the AI system itself generates the content. Several legal experts argue that AI-generated content falls outside this protection, potentially exposing xAI directly.

Second, a legal theory is emerging that AI systems can be treated as products in a legal sense, so that manufacturers can be held responsible for damages resulting from product defects – in line with traditional product liability.


xAI Not Alone in the Problem – But in the Spotlight

British regulatory authorities were quick to act: Ofcom initiated an investigation into X (formerly Twitter) for possible breaches of the Online Safety Act related precisely to Grok's ability to produce child abuse material. That case is still ongoing and could result in fines of up to ten percent of X's global turnover, according to source material reviewed by 24AI.

AI companies can no longer expect platform immunity to protect them when their own models create the illegal content.

xAI does not appear to have publicly responded to the lawsuit; 24AI was unable to obtain a comment from the company before publication.

What Happens Next?

The case will undoubtedly be an important test of how US courts address AI companies' responsibility for harmful content generated by their own models. The outcome could have far-reaching consequences for the entire industry – not just for xAI.

Legal experts note that courts are moving away from accepting "the model did it" as a sufficient defense; instead, plaintiffs must demonstrate that the company knew of the risk and failed to act. This is precisely what the three Tennessee plaintiffs are now trying to prove.