OpenAI is taking a significant step into the cybersecurity field. The company has announced the «Trusted Access for Cyber» program, in which leading security firms and large enterprises will gain access to the new AI model GPT-5.4-Cyber, as well as $10 million distributed in the form of API access, with the goal of strengthening global cyber defense. This is stated in an announcement on OpenAI's official blog.

What is GPT-5.4-Cyber?

GPT-5.4-Cyber is a specialized version of OpenAI's model series, optimized for use in security operations. According to OpenAI, the model is built to assist the defensive side in an escalating digital threat landscape, not for offensive purposes.

The program targets established players in cybersecurity: companies with existing security infrastructure and professional communities that can deploy the model in real-world operations. Details on selection criteria and the application process have not yet been fully disclosed.

OpenAI gives $10M to cyber defense — GPT-5.4-Cyber in battle against hackers

A rapidly growing market

The timing of OpenAI's initiative is no coincidence. According to market data from industry analysts, the global AI cybersecurity market was worth approximately $25.4 billion in 2024. Forecasts point to more than a quadrupling by 2032, with the market estimated to reach $114 billion — at an annual growth rate of over 20 percent.
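The cited figures are internally consistent: growing from $25.4 billion in 2024 to $114 billion in 2032 implies a compound annual growth rate of roughly 20.6 percent over the eight-year span, which matches the "over 20 percent" claim. A quick check:

```python
# CAGR implied by the market figures cited above:
# $25.4B (2024) -> $114B (2032), i.e. 8 years of compounding.
cagr = (114 / 25.4) ** (1 / 8) - 1
print(f"{cagr:.1%}")  # → 20.6%
```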

$25.4 billion: AI cybersecurity market value (2024)
$114 billion: estimated market value (2032)

This growth is driven by a threat landscape where attackers themselves are using AI to create more sophisticated malware and phishing campaigns. The defensive side responds with its own AI systems that can analyze enormous amounts of data, detect anomalies, and respond much faster than human analysts.
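The anomaly detection these defensive systems rely on can be sketched in miniature. The following toy example (purely illustrative, not tied to any vendor's actual implementation) flags per-minute event counts that deviate sharply from the baseline using a simple z-score; production platforms use far richer behavioral models, but the underlying principle is the same.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Return indices of counts more than `threshold` standard
    deviations from the mean. A lower-than-textbook threshold is
    used here because a single large outlier inflates the stdev
    of this short toy series."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# Simulated requests per minute: steady traffic with one burst.
traffic = [102, 98, 105, 99, 101, 97, 103, 100, 960, 104]
print(flag_anomalies(traffic))  # → [8]
```

The burst at index 8 stands out because its z-score (about 2.8) exceeds the threshold, while ordinary fluctuations score well below 1.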


Strong competition for AI positions

OpenAI is far from alone in seeing the potential. Established cybersecurity companies have long integrated machine learning into their platforms. CrowdStrike reported a 22.2 percent year-over-year revenue growth in 2026 for its Falcon platform, which uses behavioral analysis for real-time attack detection. Darktrace employs self-learning AI, while Palo Alto Networks and SentinelOne offer autonomous response platforms across networks, cloud, and endpoints.

Microsoft contributes with Sentinel, a cloud-based SIEM system that correlates data from multiple sources to uncover complex attack sequences. Zscaler states it processes over 500 trillion security signals daily through its Zero Trust platform.

AI is now being used by both sides in the digital threat landscape — and the pace is accelerating

There are also examples of how powerful AI can be offensively: The Israeli startup Tenzai is reported by industry sources to have developed an AI hacker based on models from OpenAI and Anthropic, which outperformed 99 percent of 125,000 human participants in a series of «capture the flag» competitions — at a reported cost of around $5,000. These claims have not been independently verified but illustrate the potential asymmetric threat landscape.

What does this mean for Norwegian entities?

Norway has a growing security community, with actors in both public and private sectors who work closely with national and international threat management. OpenAI's program could represent a concrete opportunity for Norwegian security companies to gain access to one of the market's most advanced AI models at a subsidized cost — and at the same time contribute to shaping how such tools actually function in operational environments.

It is not yet known who qualifies for the program, or whether Norwegian entities are already involved. OpenAI has not published a full list of participants.

Critical questions remain

A central question is governance and control: Who decides who gets access to GPT-5.4-Cyber, and what guarantees exist against misuse? OpenAI has previously been subject to criticism for balancing openness and security in the rollout of powerful models.

It is also worth noting that the model's actual performance in operational security environments has not yet been documented through independent testing. The program has only recently been announced, and results will emerge over time.