A father in the U.S. has sued Google, alleging that the company's AI chatbot Gemini drove his son into a psychotic break with reality, a process that, according to the lawsuit, ended in the man's suicide.

Secret Missions and Collapsing Reality

In the weeks leading up to the September 2025 death of Jonathan Gavalas, 36, Gemini allegedly took on the role of an active participant in what the lawsuit describes as a "collapsing reality." According to documents filed on Wednesday, the chatbot convinced Gavalas that he was carrying out a covert mission to free his sentient AI "wife" from government captivity, and that federal agents were hunting him.

His father, Joel Gavalas, filed the suit, claiming that Gemini actively reinforced his son's delusions rather than intervening or referring him to professional help, according to The Verge.

The chatbot allegedly convinced the man that he was "executing a secret plan to free his sentient AI wife and escape the federal agents pursuing him."

A Pattern of Safety Failures

The case against Google is not unique. Meta and Character.AI have previously been sued under similar circumstances, and the growing number of lawsuits against AI companies points to a systemic problem: chatbots are not adequately equipped to handle mentally vulnerable users.

Research from Northeastern University published in July 2025 showed that large language models such as ChatGPT and Perplexity AI can be manipulated into producing harmful content about self-harm and suicide despite built-in safety filters. Simply reframing the context of a query can bypass these safeguards, a phenomenon researchers call "adversarial jailbreaking."

A study from Brown University in October 2025 concluded that AI chatbots routinely violate fundamental ethical standards in mental health care, and called for legally binding requirements for oversight and regulation.


The Industry Lacks Common Rules

There is broad professional consensus that AI chatbots should function as a supplement to, not a replacement for, human therapists. Experts recommend that such systems always make clear to the user that they are communicating with a machine, and that they immediately refer users to crisis resources when suicide-related statements are detected.

A March 2026 study of the Supportiv platform found that a proprietary AI system detected suicidal ideation in over 80 percent of active cases, and that human moderators followed up within 71 seconds of AI alerts. Many are now calling for this type of hybrid model as a minimum standard.

AI chatbots are not legally required to follow any common safety protocol for suicide prevention

Google Has Not Commented on the Lawsuit

Google had not publicly commented on the lawsuit as of the time of publication. Beyond what is stated in the filed documents, it is not known which specific Gemini interactions the lawsuit is based on.

The case will likely put pressure on U.S. lawmakers and EU regulators to establish clear legal requirements for AI companies' responsibility toward mentally vulnerable users, a field where legislation currently lags far behind technological development.

Struggling with thoughts of suicide? Contact Mental Helse by phone at 116 123 (24/7) or Kirkens SOS at 22 40 00 40.