A video deposition of Mira Murati, OpenAI's former Chief Technology Officer, was played in court on Wednesday during the ongoing Musk v. Altman lawsuit. In the recording, Murati testified under oath that CEO Sam Altman gave her erroneous information about whether a new AI model was subject to review by the company's safety board before deployment.

"Was he telling the truth?"

According to The Verge, Murati was asked directly in the deposition whether she believed Altman was telling the truth when he claimed that OpenAI's legal department had determined that the model in question did not require review by the company's deployment safety board. Her answer was an unequivocal no.

Murati, who left OpenAI in 2024 after serving as one of the company's most central leadership figures, explained that after the incident, she could no longer take Altman's word at face value. This is particularly weighty testimony given Murati's former position and her insight into the company's inner workings.

What is the deployment safety board?

OpenAI has had a structured safety review process for deploying new AI models, but the framework has changed over time. The company previously operated a joint "Deployment Safety Board" (DSB) with Microsoft, which was tasked with assessing new models for risk before launch.

Models like GPT-4 were reportedly tested publicly on at least one occasion without the required DSB approval, which had already caused internal unrest long before Murati's testimony.

The Superalignment Collapse as a Backdrop

Murati's testimony does not come in a vacuum. In May 2024, OpenAI's Superalignment team — dedicated to solving the core technical problems surrounding safe superintelligence — was dissolved after both co-leaders, Ilya Sutskever and Jan Leike, resigned from the company.

Leike stated upon his departure that his team had been "sailing against the wind" and had struggled to get access to the compute resources it needed. He made no secret of his belief that safety culture and processes had been deprioritized in favor of new product launches, a characterization that now appears more serious in light of Murati's testimony.

One by one, OpenAI's own safety architects have spoken out — and the stories point in the same direction

The Lawsuit That Strikes at the Core

On the surface, the Musk v. Altman lawsuit concerns legal and contractual disputes, but in practice it has become an arena where OpenAI's internal conflicts over safety, governance, and trust in leadership are exposed to the public. Murati is not the first high-ranking OpenAI figure to express concern about the company's direction.

According to The Verge, neither OpenAI nor Altman has publicly responded to the specific allegations Murati made in the deposition. 24AI continues to follow the lawsuit.