Two of the world's most influential AI companies have entered into open conflict over a proposed bill in Illinois that could have major consequences for who bears responsibility when artificial intelligence causes catastrophic damage.

The Bill Dividing the Industry

The dispute was sparked by a bill proposed by Democratic State Senator Bill Cunningham, the Artificial Intelligence Safety Act (SB 3444). According to Wired, the core of the bill would give developers of advanced AI models broad legal protection against liability claims related to what the bill describes as "critical damages."

Specifically, this means that companies could be exempt from liability even in cases where AI is involved in incidents that kill 100 or more people, cause property damage of at least one billion dollars, or contribute to the development or use of chemical, biological, radiological, or nuclear weapons, as long as the damage is not caused intentionally or through gross negligence.

Anthropic and OpenAI in Open Conflict Over AI Responsibility in Mass Disasters

OpenAI Supports, Anthropic Warns

According to Wired, OpenAI has expressed its support for the bill, even though the company prefers a federal solution over what it describes as a patchwork of different state regulations. Anthropic has taken the opposite stance, warning that the law goes too far in protecting companies at the expense of victims.

The disagreement is remarkable because both companies regularly portray themselves as champions of responsible AI development. Their split in a concrete political debate suggests that the two companies' interests and risk assessments differ more than their shared rhetoric implies.

Both companies call themselves responsible AI developers — but they deeply disagree on who should actually pay when something goes wrong.

A Broad Legislative Landscape in Illinois

SB 3444 is not the only AI-related bill Illinois lawmakers are considering. The state has already passed several laws that took effect on January 1, 2026, including a ban on discriminatory use of AI in hiring processes and rules governing digital replicas of a person's voice and likeness.

A parallel bill under consideration — SB 3502, also called the AI Product Liability Act — goes in the opposite direction of SB 3444 and would establish stricter product liability rules for AI systems. Collectively, the legislative package shows that Illinois is attempting to build a comprehensive framework for AI regulation, but that there is still significant political disagreement about where the lines should be drawn.

Key thresholds in SB 3444:
- 100 million USD: minimum training cost for a model to fall under the law
- 1 billion USD: property damage threshold for the liability exemption

What Does This Mean Going Forward?

This case is an early example of something we will likely see much more of: AI companies beginning to lose their united front when faced with concrete legislation. When regulation moves from principles to specific clauses, commercial interests become more apparent.

Sources do not indicate when Illinois lawmakers plan to vote on SB 3444, and it is currently unclear whether the bill will gather enough support to pass. It is also not known whether other major AI players have taken a public stance on the matter. 24AI is following developments.