What was long dismissed as a niche or future problem has developed into an ongoing crisis in school environments worldwide. A joint investigation by the American technology magazine WIRED and the analysis platform Indicator documents that nearly 90 schools and more than 600 students are already directly affected by AI-generated fake nude images – so-called deepfake nudes. And the problem is growing.
What is the Scope?
The figures are disheartening. According to research from Thorn – a nonprofit organization working against child sexual exploitation – one in 17 teenagers, about 6 percent, reports having been targeted with such images. Among those who have created such content, one in three admits to having shared the images with fellow students.
Globally, an estimated 1.2 million children across 11 countries had their images manipulated into sexually explicit content in a single year. In some regions, the incidence is as high as one in 25 children. In surveys, around 40 to 50 percent of students say they are aware of deepfakes circulating at their school, according to sources cited by WIRED.

Schools Are Unprepared
A consistent finding is that schools rarely have guidelines in place when these incidents occur. Riana Pfefferkorn, a policy researcher at Stanford University, tells WIRED that schools are typically caught off guard, and that many teachers and principals do not even know that so-called 'nudify' apps exist – apps that can generate a fake nude image of anyone with a few keystrokes.
Deepfake abuse is abuse, and there is nothing fake about the harm it causes. — UNICEF
Professor Sandra Wachter of the Oxford Internet Institute is sharply critical of a reactive approach. Waiting to act until users are caught after the fact is irresponsible, she has argued in the source material for this story, because by then the images are already out in the world and victims cannot undo what has been done.
Victims are predominantly girls. The consequences are severe: psychological harm, damage to reputation, and privacy violations. Some students are so severely affected that they are forced to change schools.

Technology Outpaces Legislation
UNICEF has emphasized that children cannot wait for legislation to catch up with technological development. The point lands hard: in many countries, including Norway, the legal system is still working out how existing law applies to AI-generated sexual content involving minors.
Cybersecurity expert Junade Ali argues that AI models can be adapted to reduce the risk of harmful content being produced, and that secondary AI systems can monitor and filter requests. He believes that collaboration between regulators and developers must happen early in the process – not after the harm has occurred.
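To make the idea concrete, below is a minimal sketch of such a secondary screening layer in Python. Everything here is an illustrative assumption rather than any vendor's actual implementation: the keyword patterns, the function names, and the stand-in generate_image call are hypothetical, and a production system would use a trained classifier instead of a word list.

```python
import re

# Illustrative patterns only; a real filter would be a learned model,
# not a keyword list.
BLOCKED_PATTERNS = [
    r"\bnudify\b",
    r"\bundress(ed|ing)?\b",
    r"\bremove\s+(her|his|their)\s+clothes\b",
]

def screen_request(prompt: str) -> bool:
    """Return True if the request may proceed to the image model."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

def generate_image(prompt: str) -> str:
    # Stand-in for a call to an actual image-generation model.
    return f"<image for: {prompt}>"

def handle_generation_request(prompt: str) -> str:
    # The secondary system vets the request *before* generation runs,
    # rather than trying to detect harmful output after the fact.
    if not screen_request(prompt):
        return "Request blocked by content policy."
    return generate_image(prompt)

if __name__ == "__main__":
    print(handle_generation_request("a watercolor of a mountain lake"))
    print(handle_generation_request("nudify this photo of my classmate"))
```

The design choice Ali points to is the placement of the check: refusing the request up front means no harmful image is ever created, whereas after-the-fact detection can only limit further spread.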
Detection Tools Exist – But Are Not Widespread
Technical solutions that can help schools do exist, even if the field is immature. Tools like GAT Shield are designed to monitor school devices and alert administrators in real time when deepfake websites are used. Platforms like Sensity AI report detection accuracy of 95 to 98 percent for manipulated images and videos, according to the source material for this story.
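As a rough illustration of how device-level monitoring of this kind can work, the sketch below compares visited domains against a watchlist and notifies an administrator on a match. The watchlist entries, function names, and alert channel are hypothetical assumptions; the source material does not describe GAT Shield's actual internals.

```python
from datetime import datetime, timezone

# Hypothetical watchlist of known deepfake/'nudify' domains.
WATCHLIST = {"nudify-app.example", "deepfake-site.example"}

def notify_admin(alert: dict) -> None:
    # Stand-in for delivery via email, dashboard, or SIEM in a real deployment.
    print(f"ALERT: {alert}")

def check_visit(device_id: str, domain: str) -> None:
    """Raise an alert if a monitored school device visits a flagged domain."""
    if domain.lower() in WATCHLIST:
        notify_admin({
            "device": device_id,
            "domain": domain,
            "time": datetime.now(timezone.utc).isoformat(),
        })

if __name__ == "__main__":
    check_visit("device-042", "nudify-app.example")  # triggers an alert
    check_visit("device-042", "wikipedia.org")       # no alert
```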
Nevertheless, the threshold for adopting such tools is high for many schools – both in terms of cost and expertise. Melissa Stroebel, research director at Thorn, puts the one-in-17 figure in perspective: "Think about how many students are in an average classroom. That's one affected student per classroom. This means that every single community is experiencing this, and it is high time that we as a society respond," WIRED relays.
What Should Parents, Schools, and Authorities Do?
Experts agree that the solution requires effort on several levels simultaneously:
For schools: Establish clear guidelines and preparedness plans before an incident occurs. Educate teachers and principals on what deepfake tools are and how they are used.
For parents: Talk openly with children and teenagers about what deepfakes are, and that creating or sharing such content is abuse – regardless of whether the images are 'real' or not.
For authorities: Clarify legislation, and ensure that the production and distribution of AI-generated sexual content involving minors is clearly criminalized. Support schools with resources for prevention and management.
The crisis is global. Norwegian schools and Norwegian youth are not shielded from the apps, websites, or social mechanisms driving the spread. According to WIRED and Indicator's analysis, the problem shows no signs of abating on its own.
