Autonomous AI agents — systems that act independently, make decisions, and perform tasks without continuous human oversight — are now squarely within the scope of the EU's AI Act. According to guidelines published by the European Commission in February 2026, agents that conduct financial transactions or influence economic decisions are formally classified as high-risk systems under the Act.

What the Law Actually Says About Agents

The AI Act contains no explicit definition of "agentic AI" or "autonomous agents" as terms, but the law's broad definition of an "AI system" — a machine-based system that operates with varying degrees of autonomy to generate outputs that influence physical or virtual environments — captures them regardless. It is therefore the functionality, not the label, that determines which risk category a system falls into, according to source material from Devdiscourse.

The regulation operates with four risk levels: unacceptable, high, limited, and minimal. The strictest rules apply to prohibited practices, which have been banned since February 2, 2025.

EU AI Act now intervenes in autonomous agents – Norwegian companies must adapt

High-Risk Requirements Affecting Autonomous Agents

It is particularly the high-risk category that is relevant for businesses developing or deploying autonomous agents. Use cases such as critical infrastructure, public services, financial services, education, and judicial decision-making support are among the sectors explicitly mentioned in the regulation.

The deadline for full compliance with the high-risk rules is August 2, 2026 — just months away.

The requirements for developers and users of high-risk autonomous agents are extensive. They include, among other things:

Risk Management: A continuous and documented risk management system throughout the system's lifecycle is required. The system must identify, estimate, and mitigate known and foreseeable risks to health, safety, and fundamental rights.

Data Quality: Training, validation, and test data must be relevant, sufficiently representative, and as free from errors as possible, to minimize the risk of discriminatory outcomes.

Technical Documentation and Logging: Detailed technical documentation is mandatory, in addition to automatic logging of events — which the source material points out is particularly challenging for autonomous systems that act without continuous human oversight.

Conformity Assessment: High-risk systems must undergo a formal conformity assessment. For systems integrated into products already covered by EU product safety directives, third-party "notified bodies" are required. For standalone high-risk AI, self-assessment is currently permitted, but experts have, according to the source, argued that this should be tightened.
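The logging obligation above is the requirement most directly amenable to engineering work. As a minimal illustrative sketch — the field names and the hash-chaining scheme are assumptions for illustration, not requirements spelled out in the AI Act — an agent's decisions could be recorded in an append-only, tamper-evident log:

```python
import json
import hashlib
from datetime import datetime, timezone

class AgentAuditLog:
    """Append-only, hash-chained event log for agent decisions.

    Illustrative sketch only: the schema and chaining approach are
    assumed here, not prescribed by the AI Act.
    """

    def __init__(self):
        self._entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, agent_id, action, inputs, outcome):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "action": action,
            "inputs": inputs,
            "outcome": outcome,
            "prev_hash": self._prev_hash,
        }
        # Chain each entry to the previous one so after-the-fact
        # edits become detectable during an audit.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self._entries.append(entry)
        return entry

    def verify(self):
        """Recompute the chain and confirm no entry was altered."""
        prev = "0" * 64
        for e in self._entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AgentAuditLog()
log.record("trader-01", "place_order", {"ticker": "EQNR", "qty": 100}, "filled")
log.record("trader-01", "cancel_order", {"order_id": "A123"}, "cancelled")
```

The point of the chaining is that a log produced by an unsupervised agent can later be checked for completeness and integrity — the kind of evidence a supervisory authority is likely to ask for.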

Violations of the prohibited-practice rules carry fines of up to €35 million, or 7% of global annual turnover as an alternative penalty.

Norwegian Relevance: EEA and Implementation

Norway is not an EU member, but through the EEA Agreement it is in practice bound by most of the EU's digital regulations. The AI Act is expected to be incorporated into the EEA Agreement, meaning that Norwegian companies developing or deploying AI systems — especially autonomous agents — should prepare for the same requirements that apply in the EU. Businesses targeting the European market will be directly affected by the regulations regardless.

For Norwegian startups, technology companies, and public enterprises already building agentic systems, it is no longer a distant regulatory future — the deadline is now just months away.

What Should Norwegian Actors Do Now?

The source material points out three particularly urgent actions:

  • Classification: Assess whether your AI systems fall under the high-risk category based on their use case and degree of autonomy.
  • Documentation: Begin work on technical documentation and logging systems — this takes time to integrate into existing architecture.
  • Risk Management Process: Establish a formal and iterative risk management system that can be documented for supervisory authorities.
For businesses that already have autonomous agents in production, the window for adaptation is narrow.