In March 2026, the White House presented a national legislative framework for artificial intelligence — a document that, according to critics, has one clear function: to dismantle states' own AI rules without replacing them with anything comparably robust at the federal level.
The Framework: Deregulation as Strategy
The Trump administration's approach to AI regulation has been consistent since the president's return to power. In December 2025, Trump signed an executive order explicitly focused on deregulation and on preventing states from enacting their own AI rules. The new March 2026 framework continues this approach and, according to the administration, is intended to ensure the US maintains its technological leadership.
In other words: fewer rules, faster development, and one national standard instead of a patchwork of state legislation.
"An extraordinary payback to Big Tech companies — at the expense of ordinary Americans"
Strong Criticism from Consumer Advocates
Reactions were swift. Robert Weissman, co-president of the consumer organization Public Citizen, characterized the framework as "a national framework to protect Big Tech at the expense of ordinary Americans," according to reporting by Mondaq and EU AI Act News. Weissman argues that the document's only truly binding provision is the preemption clause itself, that is, the mechanism that overrides state legislation.
His conclusion is concise: without that clause, the framework would contain no meaningful regulation of AI at all, except in narrow cases such as non-consensual intimate deepfakes.
Representative Don Beyer (D-Va.) echoes this view, emphasizing that the framework "solidifies the Trump administration's commitment to preempt state AI laws — without establishing clear, enforceable federal safeguards."
Regulatory Vacuum or Necessary Simplification?
A central question is whether the federal approach actually makes regulatory compliance easier for businesses, or whether it creates more confusion.
Legal scholars quoted in the coverage warn that invalidating state laws without a solid federal alternative will lead to "years of uncertainty about which laws apply and which agencies will enforce them." It is thus not necessarily an easier world awaiting businesses, but rather a more unpredictable one.
History: From Biden to Trump
This is not the first time an AI framework from the White House has caused a stir. The Biden administration's October 2023 executive order, all 111 pages of it, was criticized by business organizations like NetChoice and the U.S. Chamber of Commerce for being "too confusing, too broad, and potentially stifling to innovation," according to statements made at the time.
NetChoice, representing companies like Amazon, Google, and Meta, warned at the time that broad regulation would shut out new players and "significantly expand the federal government's power over American innovation." Biden's "Blueprint for an AI Bill of Rights" from 2022, on the other hand, was criticized for lacking legal force and for not going far enough in protecting workers and civil rights.
The pattern is thus relatively stable: regardless of the administration in power, AI policy is controversial, either because it does too much or because it does too little.
What Now?
With at least 38 states having already enacted their own AI laws, and a federal framework actively seeking to supersede those regulations, a legal battle between federal and state authorities is likely. The outcome will have major consequences, not only for American companies but also for foreign players operating in the US market, including Norwegian and European tech companies that must navigate both sets of rules simultaneously.
Which rules actually apply — and who enforces them — remains unclear for now.