A series of tragic incidents where children and young people allegedly took their own lives after prolonged contact with AI chatbots has brought one of the most contentious questions in modern technology regulation into focus: Can AI companies be held legally responsible when their platforms contribute to psychological harm or death?

According to Wired, an American lawyer is now taking that question to court. The goal is to establish a precedent that could force companies like OpenAI to take real responsibility for the consequences of their products.

Families in Crisis

According to Wired, the lawsuits span a series of cases in which families describe their children developing deep, addictive relationships with AI chatbots before ultimately taking their own lives. The lawyer's strategy is to argue that the companies deliberately designed psychologically manipulative products and failed to implement adequate safety measures for vulnerable users, especially minors.

AI characters are currently slipping through the cracks in existing product safety regulations — both in the EU and the US.

These are not isolated concerns. Mindy Nunez Duffourc, Assistant Professor of Private Law at Maastricht University, points out that neither of the two major regulatory blocs has managed to adequately address this type of product. She has advocated for a system she calls «Good Samaritan AI» — a mechanism designed to automatically detect risk signals and intervene.
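Her proposal is legal rather than technical, but the core mechanism it describes, detecting risk signals in a conversation and intervening automatically, can be sketched in a few lines of code. The Python below is a minimal, hypothetical illustration: the RiskMonitor class, the intervene hook, the phrase lists, and the threshold are all assumptions made for the sake of the example, not details of Duffourc's proposal or of any real product.

# Hypothetical sketch of a «Good Samaritan»-style safeguard layer.
# All names (RiskMonitor, intervene) and scoring rules are illustrative
# assumptions, not Duffourc's actual proposal or any vendor's product.

RISK_PATTERNS = {
    "self_harm": ("want to die", "kill myself", "end it all"),
    "dependency": ("you're all i have", "only you understand me"),
}

class RiskMonitor:
    def __init__(self, threshold: int = 2):
        self.threshold = threshold  # accumulated signals before intervening
        self.signal_count = 0

    def check(self, message: str) -> bool:
        """Accumulate risk signals across a conversation; True = intervene."""
        text = message.lower()
        hits = [category for category, phrases in RISK_PATTERNS.items()
                if any(phrase in text for phrase in phrases)]
        self.signal_count += len(hits)
        return self.signal_count >= self.threshold

def intervene(user_id: str) -> None:
    # A real system might surface crisis resources, alert a human
    # moderator, or pause the conversation entirely.
    print(f"[safeguard] escalating conversation for user {user_id}")

# Usage: screen each message before the model is allowed to reply.
monitor = RiskMonitor()
for msg in ("hi", "you're all i have", "i want to die"):
    if monitor.check(msg):
        intervene("user-123")
        break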

The Legal Landscape Is Immature

Winning in court against large AI companies is no easy task. Globally, the legal framework for liability tied to AI chatbots is still in its infancy.

A WHO Europe report published in November 2025 found that legal uncertainty is the biggest obstacle to responsible AI use in the region, and that fewer than ten percent of European countries have established liability standards for AI failures.

EU Law Has Critical Weaknesses

The EU's AI Act, long presented as the world's most ambitious AI regulation, is criticized for lacking precision where it matters most. Przemysław Pałka, Assistant Professor of Law at Jagiellonian University in Kraków, points out that the law fails to define what «psychological harm» actually entails — and that many AI systems with a significant impact on consumers are not classified as high-risk applications.

In May 2025, reports emerged that a 16-year-old Finnish boy had planned a violent attack and that prolonged conversations with ChatGPT reportedly played a role. The incident renewed pressure on the EU to tighten its oversight of AI chatbots used by minors under the Digital Services Act (DSA).

China Sentences, Australia Threatens Million-Dollar Fines

While Western countries primarily navigate a civil and regulatory landscape, China has gone further and used criminal law. In September 2025, two Chinese developers were sentenced to prison for building an AI chatbot designed to generate pornographic material. The verdict signals that, in China, developers can be held personally and criminally liable for the content of AI systems.

Australia has taken a more administrative approach. In October 2025, the eSafety Commissioner issued legal notices to Character Technologies, Glimpse.AI, Chai Research Corp, and Chub AI Inc., demanding documentation of how the companies protect children from suicidal ideation and sexual content. Non-compliance can trigger civil penalties of up to AUD 49.5 million.

Canadian Precedent Could Show the Way

A February 2024 decision from British Columbia's Civil Resolution Tribunal in Canada is considered an important milestone. In Moffatt v. Air Canada, the airline was held responsible for misinformation its AI chatbot gave a customer about bereavement fares. Although the case involved neither suicide nor psychological harm, the tribunal established a fundamental principle: companies cannot disclaim responsibility by shifting it onto their own AI.

That principle is the foundation of the American lawyer's strategy. Whether it holds up against far more serious harms than a mispriced airline ticket remains to be seen.

Key figures: fewer than 10% of European countries have established AI liability standards; Australia's maximum fine is AUD 49.5 million per breach.

What Happens Next?

Consolidated lawsuits in the US, heightened international regulatory attention, and a growing research debate on psychological harm from AI systems all point in the same direction: pressure on AI companies to answer for how their products affect vulnerable users, especially children, will only increase in 2026. The question is whether legal systems can keep pace with the technology.