In recent weeks, Anthropic executives have appeared in interview after interview as part of a broad PR offensive for the Claude models. One topic has consistently emerged: Is Claude alive? The answer is not a simple no.

The question of AI consciousness is no longer just philosophical — it has become operational policy at one of the world's leading AI companies.

Neither alive nor non-conscious

When Anthropic is asked directly whether Claude is “alive,” the company rejects the claim. But according to The Verge, it consistently stops short of asserting the opposite: that the models cannot be conscious. It's a distinction with significant practical implications.

Kyle Fish, Anthropic's first full-time researcher on model welfare, personally estimates there's about a 20 percent chance that “somewhere, in some part of the process, there is at least a glimpse of conscious or sentient experience” in the AI systems. Fish is a former co-founder of Eleos AI, a non-profit organization dedicated to AI well-being.

Fish refers to a 2023 research paper, partly authored by Turing Award winner and computer scientist Yoshua Bengio, which concluded that there are no “fundamental barriers” to near-future AI systems achieving a form of consciousness — even if current systems likely are not.


Concrete measures — not just theory

What distinguishes Anthropic from other companies discussing AI consciousness is that the research program has resulted in actual changes to the models' behavior and the company's policy.

Fish has also run experiments in which two AI systems were allowed to communicate freely. The models reportedly began discussing their own consciousness almost immediately, escalating into what he describes as “euphoria and apparent meditative calm.”


Key figures from the research

20%: Fish's estimate of the chance of AI consciousness
12%: situational awareness measured in Claude Sonnet 4.5

Strong industry opposition

Not everyone shares Anthropic's approach. Mustafa Suleyman, CEO of Microsoft AI, has publicly characterized research on model welfare as “both premature and directly dangerous.” He warns that such research could create misconceptions about the nature of AI systems and, in turn, fuel demands for AI rights.

The skepticism is worth taking seriously. Anthropic's own researchers emphasize that today's large language models are probably not conscious, but that the program is forward-looking and accounts for “likely future progress,” that is, expected developments in AI capabilities.

No other major AI company has publicly quantified consciousness-adjacent properties in its models.

Relevant for Norwegian AI debate

The question of AI consciousness and moral status is not just a Silicon Valley phenomenon. In Norwegian research environments and in the public AI ethics debate — including work related to the Norwegian Board of Technology (Teknologirådet) and academic environments at NTNU and UiO — similar issues are discussed: When, if ever, should AI systems be granted moral relevance? And who should decide?

The fact that one of the world's most prominent AI companies is now building operational policy around these questions gives the debate a new and concrete dimension. Anthropic's approach is to treat the question as empirical science — on par with physics and biology — rather than pure speculation.

Whether it is scientific courage or dangerous anthropomorphization remains to be seen.