EU Parliament slows down AI Act implementation
In March 2026, the EU Parliament decided to postpone deadlines for parts of the groundbreaking AI Act – the most comprehensive regulatory framework for artificial intelligence ever adopted. The decision is part of a broader simplification package known as the "omnibus proposal" and means that a number of obligations for high-risk AI systems are being pushed back, according to PYMNTS.com.
This is directly relevant for Norwegian stakeholders: through the EEA Agreement, the AI Act will in practice apply to Norwegian companies and public agencies that supply products or services to the EU market.
New Deadlines – What's Postponed and Until When?
The original AI Act has a phased introduction, with various provisions entering into force between 2025 and 2030. The most resource-intensive requirements apply to high-risk AI systems – and it is precisely these that are now being postponed.
In addition to the high-risk deadline, it has been proposed that AI systems already regulated by existing EU sectoral legislation on safety and market surveillance need not comply with the new requirements until August 2, 2028. It has also been decided to give providers until November 2, 2026, to implement watermarking of AI-generated content.

Three Reasons for the Delay
The reasons for the postponement are complex, but largely point to systemic failures in the implementation process itself.
Standards are not ready. Harmonized technical standards – which are necessary for companies to actually know what is required of them – are still under development. The EU's Joint Research Centre (JRC) has emphasized that these standards must be in place before the obligations for high-risk systems come into effect.
The AI Office is understaffed. The EU AI Office, which is to enforce the law, is planned to have 85 employees – but only around 30 of them are currently working on the implementation, according to reported figures. By comparison, the UK AI Safety Institute has over 150 employees.
Timelines for Codes of Practice are too tight. The development of the so-called "Codes of Practice" – concrete guidelines for AI developers – involves consulting around 1,000 stakeholders, including businesses, authorities, and academia. The timelines have been criticized as unrealistic. Boniface de Champris from the Computer and Communications Industry Association (CCIA) stated that "the AI Act's weaknesses, particularly the overly tight timeline for applying the rules, have already become apparent."

What Does This Mean for Norwegian Stakeholders?
Norway is not an EU member, but through the EEA Agreement, it is closely integrated into the EU's internal market. This means that Norwegian companies offering AI solutions in the EU – for example, in healthcare, finance, recruitment, or public administration – will in practice have to comply with the AI Act.
The postponement provides real, albeit limited, room for maneuver. Norwegian companies and research environments already engaged in adaptation work can use the extra time to await clearer technical standards and more mature guidance documents – rather than investing heavily in an interpretive landscape that is still evolving.
Criticism from Both Sides
Reactions from the business community are mixed. Many business leaders have expressed concern that strict rules could hinder innovation and weaken European competitiveness at a time when the USA and China are investing massively in AI without similar regulatory burdens.
At the same time, Gartner analyst Nader Henein warns that the main reason the original timeline did not hold was that "regulators would not have been ready to enforce."
For Norwegian stakeholders planning long-term in an EU-affiliated market, the message is clear: the postponement is not a signal that the requirements will never come – it is a signal that they will arrive better prepared.
