Since generative AI became widely available, attack volumes have exploded. According to research cited by Digi.no, Thomas Zuliani, a former IT security director at several Danish companies, describes the greatest threat not as sophistication in isolation, but as the enormous scale at which attackers can now operate.
From 16 hours to five minutes
Previously, a convincing, personalized phishing attack required significant human effort. Now, AI tools can produce an effective phishing email in about five minutes, a task that previously took human experts around 16 hours. That is an efficiency gain of nearly 192 times (960 minutes versus five), according to the available research data.
The consequence is that threat actors can now send out 10,000 unique, highly personalized emails at a cost equivalent to a single traditional spear phishing attack.
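The 192x figure above follows directly from the cited numbers. A minimal sanity check (the per-campaign extrapolation in the second half is illustrative, not a figure from the source):

```python
# Back-of-the-envelope check of the cited speedup:
# 16 hours of expert effort vs. about 5 minutes with AI tooling.
MANUAL_HOURS = 16
AI_MINUTES = 5

manual_minutes = MANUAL_HOURS * 60        # 960 minutes
speedup = manual_minutes / AI_MINUTES     # 960 / 5 = 192

print(f"Speedup: {speedup:.0f}x")         # prints "Speedup: 192x"

# Illustrative only: at that rate, the time budget of a single
# manual campaign funds hundreds of unique AI-generated emails.
emails_per_manual_budget = manual_minutes // AI_MINUTES
print(f"Emails per 16-hour budget: {emails_per_manual_budget}")
```

The 10,000-email figure in the next paragraph concerns cost rather than time, so it is not derived from this calculation.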

AI automates reconnaissance and content
Many still believe that phishing is easy to spot through poor grammar or strange phrasing. That image is outdated. AI models scan open sources (social media, leaked databases, company news) and use this information to produce messages that are grammatically flawless and tailored to the recipient.
Stephanie Carruthers, IBM's Global Head of Cyber Range and Chief People Hacker, is direct in her assessment: “With very few instructions, an AI model can write a phishing message tailored specifically for me. It's terrifying,” according to the research material.
AI models also help attackers polish their messages, making them more concise and urgent, and far more convincing to recipients.

Deepfakes are no longer a future scenario
The technological development does not stop at text. Trade in deepfake tools has increased by 223 percent, and deepfakes now account for 6.5 percent of all scam attacks — an increase of over 2,100 percent since 2022.
The consequences can be massive. In February 2024, a finance employee at the engineering company Arup was tricked into transferring 25 million dollars to scammers after joining a video meeting in which AI-generated deepfakes of the company's CFO and other executives appeared to participate live.
Filters are lagging behind
A serious problem for businesses is that traditional security tools detect AI-generated content only to a limited extent. Krishna Vishnubhotla, VP of Product Strategy at Zimperium, points out that many common phishing filters simply never trigger, because the messages lack the classic warning signs.
Nor is it only untrained employees who are affected. Nearly two-thirds of IT and security managers admit to having fallen for phishing attempts themselves. According to the research material, AI-powered spear phishing attacks have a success rate of 47 percent even against trained security experts.
What are security managers doing now?
According to Digi.no, experienced security leaders like Thomas Zuliani are preparing by coming to terms with the fundamental change in attackers' capabilities, not just technologically but operationally. Scale is the new problem: it is no longer one sophisticated attack that must be stopped, but potentially tens of thousands at once.
Researcher Mark Stockley from Malwarebytes, cited by MIT Technology Review, believes the direction is clear: “We are going to live in a world where the majority of cyberattacks are carried out by agents. The only question is how quickly we get there.”
For Norwegian and Nordic businesses, the picture is the same as globally: AI lowers the threshold for who can carry out advanced attacks and raises expectations for what a defense must withstand.
