OpenAI has set an ambitious new goal: to develop a fully automated AI researcher — an agent-based system designed to tackle large and complex scientific challenges without the need for ongoing human guidance. This is reported by MIT Technology Review.
A Major New Initiative for OpenAI
This initiative represents a significant escalation from today's AI tools, which primarily function as assistants for human researchers. What OpenAI is now aiming for is a system that can actively plan, conduct, and interpret research independently — from hypothesis generation to analysis and conclusion.
Details regarding the system's architecture and timeline are currently scarce, but according to MIT Technology Review, the company has presented this as a central ambition in its continued development of agent-based systems.
“Large Language Models are Pandora's box for academic research. They can eliminate academic independence, creativity, and independent thinking — but they can also enable unimaginable co-creation and productivity.” — Professor Julian Savulescu, The Uehiro Oxford Institute

Ethical Questions Loom Large
Experts are far from unanimous in their assessment of what automated AI research will entail. According to available research in the field, there are a number of serious ethical issues that have not yet been resolved.
A central point is what is described as a “responsibility gap”: when an AI system makes errors, produces biased results, or fabricates information, it is unclear who bears the legal and moral responsibility. In medical research, the consequences of such errors can be severe.
Additionally, there is a risk that existing biases in training data will be amplified and reflected in research results. A lack of transparency about how the models make decisions — the so-called “black box” problem — makes it difficult to verify findings.
Professor Timo Minssen from the University of Copenhagen emphasizes that ethical guidelines are “absolutely necessary to shape the ethical use of AI in academic research.”

Who Will Lose Their Jobs?
The question of job displacement is more nuanced than many fear, but not everyone is equally vulnerable. An August 2025 working paper from Stanford found that early-career workers aged 22–25 in professions with high AI exposure experienced a 13 percent decline in employment relative to less-exposed professions, measured from late 2022 through July 2025.
The dominant trend appears to be job transformation rather than outright job loss. Research suggests that approximately 12 million jobs in the US, equivalent to 7.8 percent of the workforce, are currently performed with generative AI handling at least 50 percent of the work.
Analysts nevertheless point out that new roles may emerge — such as “AI validation experts” — to ensure the quality of AI-generated research and maintain ethical standards.
A Competence Risk for Future Researchers
A concern raised in research communities is that automating basic research tasks could undermine the training of new generations of researchers. When AI systems take over fundamental methodological work, there is a risk that young professionals will never develop those skills themselves.
It is currently unclear to what extent OpenAI's new initiative will address these challenges, or whether the company prioritizes delivering technological capacity first and regulatory and ethical frameworks later.
