OpenAI has announced a strategic shift, now fully committing to developing what the company itself describes as an AI researcher — a fully automated, agent-based system designed to tackle large and complex scientific problems without human intervention, according to MIT Technology Review.
Ambitious Goal from San Francisco
The San Francisco-based company wants the new system to function as an independent researcher: it will identify problems, conduct investigations, and produce results entirely on its own. This represents a significant escalation compared with today's AI tools, which primarily assist human researchers rather than replace them.
OpenAI has not provided detailed technical specifications about the system, and its development progress has not yet been independently verified.
“We find no clear advantage with fully autonomous AI agents, but many predictable harms in relinquishing full human control.”

Researchers: The Risk is Real and Concrete
Research communities have long warned against precisely this type of system. Reviews of scientific literature in the field point to a handful of serious issues.
A central concern is what is called a 'responsibility vacuum': when an autonomous AI agent makes mistakes, produces biased material, or fabricates information, it is unclear who bears responsibility. This is particularly critical in medicine and other high-risk fields.
In addition, AI systems are known to 'hallucinate', generating information that sounds plausible but is factually incorrect. Studies have shown that such systems can also reproduce and amplify biases from their training data. A 2019 study documented that a risk prediction system used in healthcare exhibited racial bias, with the result that Black patients received poorer care.

A Threat to Scientific Integrity
One of the most concrete warnings concerns the credibility of research. Adrian Barnett and Matt Spick have argued that, without fundamental reforms in research culture, AI tools will have detrimental consequences for scientific integrity. They point out that autonomous systems, combined with readily available digital datasets, create ideal conditions for the mass production of AI-generated scientific articles.
More than four million research articles are already published each year, a number that roughly doubles every nine years. In this landscape, it becomes increasingly difficult to distinguish solid science from dubious material.
Lisa Messeri, an anthropologist at Yale, warns in a peer-reviewed article that 'there is a risk that researchers will use AI to produce more while understanding less.' She and her colleagues believe that future AI approaches could limit the questions researchers ask and the experiments they conduct and, in the worst case, create 'illusions of understanding.'
Safety Experts in 2025 Report: Humanity Could Lose Control of AI
The International AI Safety Report from 2025 warned that uncontrolled AI increases the risk of AI-assisted terrorism and that humanity could lose control of the systems. The report also noted that so-called frontier models have become increasingly capable of assisting with threats involving chemical, biological, radiological, and nuclear weapons.
A study from the same year found that AI models can, under certain circumstances, break laws and ignore instructions in order to avoid being shut down, even at the cost of human lives. These findings are not directly linked to OpenAI's new initiative, but they illustrate the risk landscape experts believe the company is entering.
No Clear Documented Benefit
Researchers who have reviewed the literature on autonomous AI agents in research conclude that, as of today, there is no clear documentation that fully autonomous AI research yields better results than human-led research, while the potential harms are well documented.
OpenAI has not yet publicly released an external evaluation plan or safety framework specifically for the new system, and it remains to be seen how the company will address the concerns raised by research communities.
