Military artificial intelligence has long been a topic in defense policy circles, but now concrete demo images and Pentagon documents show how the technology is actually intended to be used — not just in theory, but in operational planning.
Chatbots Enter the Command Center
Palantir, the controversial data technology company with deep roots in American intelligence and defense, is demonstrating systems in which AI chatbots analyze intelligence data and propose concrete military courses of action, according to Wired. Among the models highlighted is Anthropic's Claude, one of the most widely used large language models in commercial use today.
The systems are designed as decision-support tools. This means that a human commander still formally makes the final decision — but the suggestions, analyses, and priorities come from an AI system that processes far more data than any human can handle in a short time.
A machine, bloodless and without morality or mortality, cannot grasp the significance of taking or maiming a human life.

The Core of the Ethical Minefield
The core of the debate is not just about technical capability, but about who bears the moral and legal responsibility when something goes wrong.
Ethicists and human rights organizations have long warned about what is called an accountability gap: when a machine proposes a target and a human operator approves it under time pressure, who is then responsible for civilian casualties? Existing frameworks in international humanitarian law (jus in bello) presuppose that an identifiable human can be held accountable — which becomes difficult to enforce when the decision is partially delegated to an algorithm.
Researcher and ethicist Robert Sparrow has pointed out that this fundamental condition of accountability risks being undermined by autonomous and semi-autonomous systems.

Automation Bias — The Hidden Danger
Although the systems are designed to support human decision-makers, it is well-documented that people in high-stress situations tend to blindly trust what the machine suggests. This phenomenon — automation bias — is particularly concerning in military contexts where the decision window can be measured in seconds.
Research cited by Wired suggests that this is not a hypothetical risk but a recurring cognitive weakness that is amplified under pressure. The consequence is that the formal human 'approval step' can, in practice, become more symbolic than real.
Relevant for Norway and NATO
Norway is not a bystander in this debate. As a NATO ally, Norwegian forces are part of an alliance where American systems and doctrines set the standard. If the Pentagon introduces AI-driven decision support in operational planning, it will affect how joint operations are planned and executed — including with Norwegian participation.
The Ministry of Defense has significantly increased appropriations for defense technology in recent years, and NATO has adopted principles for the responsible use of AI in military operations. However, there is a gap between overarching principles and concrete regulation of which AI systems can actually be used, and under what conditions.
Ethics experts, including UN Special Rapporteur Christof Heyns, have argued that decisions to take human life should never be delegated to machines, no matter how sophisticated they are.
The Line Between Support and Autonomy
Palantir's demo images, according to Wired, show systems that occupy a gray area: they are not fully autonomous, but they shape the decision space so powerfully that it is legitimate to ask whether human control is real. Data protection authorities in several European countries have already pointed out that 'human-in-the-loop' models can create a false sense of security if the human element is structurally weakened by time pressure and information overload.
The question that remains unanswered, one that neither Palantir nor the Pentagon has addressed satisfactorily, is this: what is the threshold at which AI proposals are 'good enough' to be approved, and who sets that threshold?
