The AI Act receives its first major adjustment

The EU's Artificial Intelligence Regulation, known as the AI Act, has undergone what is described as its first real revision. According to PPC Land, the EU institutions have reached a preliminary agreement, dubbed the "AI Omnibus", which significantly postpones the most demanding obligations for high-risk AI systems.

Standalone high-risk AI systems were originally set to comply with the new rules from August 2026. The deadline has now been postponed to December 2027. For high-risk AI embedded in regulated products, such as medical devices, the deadline shifts from August 2027 to August 2028.

Dec. 2027
New deadline for standalone high-risk AI systems
Aug. 2028
New deadline for high-risk AI embedded in regulated products

Why the postponement?

The main driver behind the delay is that the EU has not yet finalized the technical standards and guidelines that companies are expected to comply with. National supervisory authorities are also not fully operational, and accredited conformity assessment bodies have yet to be designated in sufficient numbers.

The delay thus concerns more than the rules themselves: the entire enforcement infrastructure is still under construction.

Industry: Relieved, but not satisfied

Businesses and industry associations have largely welcomed the postponement. Guido Lobrano, Director for Europe at the global technology industry association ITI, calls the delay and the accompanying simplifications "welcome and necessary steps." Holger Lösch of the Federation of German Industries points out that longer transition periods are crucial for companies to be able to plan investments and implementation.

Nevertheless, ITI believes the AI Omnibus does not fully resolve the complexity of the EU's overall regulatory landscape for AI, pointing in particular to overlaps between the AI Act and other sectoral regulations. The industry organizations ZVEI and Bitkom criticize the fact that heavily regulated sectors such as medicine have not been granted similar relief.

An employment system placed on the market before December 2027 could remain outside the scope of the AI Act indefinitely.

The warning about the loophole

The most serious objection concerns a potential loophole. The law is, as a general rule, not retroactive: systems placed on the market before the new deadlines are largely exempt unless they are later substantially modified.

Michael McNamara, co-rapporteur for the Digital Omnibus on AI, warns that this could give companies an incentive to rush high-risk AI systems onto the market before the new deadlines take effect, precisely in order to avoid the stricter requirements. Laura Caroli, a former negotiator of the AI Act, puts it concretely: an employment system launched before December 2027 could in practice remain unregulated by the AI Act indefinitely, provided no significant changes are made to it afterwards.

The race against the deadline could become a new regulatory headache for the EU.

What is classified as high-risk?

The AI Act distinguishes between two groups of high-risk systems. Standalone high-risk systems are AI used in sensitive areas such as recruitment, education, credit scoring, critical infrastructure and law enforcement. Embedded high-risk AI serves as a safety component in products already covered by EU product legislation, such as medical devices.

What does this mean for Norwegian actors?

Norway is not an EU member, but through the EEA Agreement it is closely tied to European regulation. Norwegian companies and research institutions offering AI products or services in the EU market will be directly subject to the AI Act. This applies to everything from Norwegian health technology startups to established companies in maritime surveillance and finance.

The deadline extension gives Norwegian actors more time to prepare, but the loophole issue is just as relevant for them. Companies that choose to place high-risk systems on the market quickly may indeed avoid the heaviest obligations, but they risk being left with older, unregulated systems if the market and customers eventually demand solutions that comply with the AI Act.

Penalty levels remain unchanged: violations can incur fines of up to 7 percent of global annual turnover or 35 million euros, whichever is higher.

Important dates to note

August 1, 2024
AI Act entered into force
February 2, 2025
Prohibition of certain AI practices and requirements for AI literacy became applicable
August 2, 2025
Rules for General Purpose AI models (GPAI) became applicable
December 2, 2027
New deadline for standalone high-risk AI systems
August 2, 2028
New deadline for high-risk AI embedded in regulated products