OpenAI has published a detailed review of the security architecture behind Sora 2 and the associated Sora app. According to the company, security was not added as an afterthought, but integrated from the ground up in both the model and the platform's social features. Nevertheless, there is no shortage of critical voices.
Watermarking and traceability to reveal AI content
One of the key measures highlighted by OpenAI is the combination of visible watermarks and embedded C2PA metadata, an industry standard for labeling content as AI-generated. In addition, OpenAI says it maintains internal search tools that can trace a given video back to the Sora model, according to the company's own description of the platform.
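OpenAI has not published its internal tooling, but C2PA manifests are openly specified and can be probed with standard tools. The following is a minimal, heuristic Python sketch, not a conforming validator (real verification requires signature checks, for example via the Content Authenticity Initiative's c2patool), that walks the top-level boxes of an MP4 file and looks for an embedded manifest. The box-walking logic and the byte-level "c2pa" label scan are assumptions based on the public specification:

```python
import struct
import sys

def iter_boxes(data: bytes):
    """Yield (box_type, payload) for top-level ISO BMFF boxes in `data`."""
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8]
        header = 8
        if size == 1:  # a 64-bit "largesize" follows the type field
            size, = struct.unpack_from(">Q", data, offset + 8)
            header = 16
        elif size == 0:  # box runs to the end of the file
            size = len(data) - offset
        if size < header:
            break  # malformed box; stop rather than loop forever
        yield box_type, data[offset + header:offset + size]
        offset += size

def looks_like_c2pa(path: str) -> bool:
    """Heuristic: does any top-level `uuid` box carry a C2PA/JUMBF manifest?"""
    with open(path, "rb") as f:
        data = f.read()
    for box_type, payload in iter_boxes(data):
        # C2PA data in MP4 lives in a top-level `uuid` box, and the JUMBF
        # manifest store inside it is labelled "c2pa", so a byte scan is a
        # cheap signal (it proves presence, not authenticity).
        if box_type == b"uuid" and b"c2pa" in payload:
            return True
    return False

if __name__ == "__main__":
    for video in sys.argv[1:]:
        verdict = "C2PA manifest found" if looks_like_c2pa(video) else "none detected"
        print(f"{video}: {verdict}")
```

Note that a positive hit only shows a manifest is present; whether it is intact and correctly signed is exactly what a full validator has to establish.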
Critics, however, point out that visible watermarks can already be removed by readily available third-party software, which undermines the effectiveness of this type of marking as the sole line of defense.

Strict control over the use of real people's faces
One of Sora's more talked-about features is the ability to create videos based on uploaded images of real people. OpenAI requires users to confirm that they have consent from those depicted. A separate "characters" function also gives individuals control over their own appearance and voice, including the ability to revoke permissions at any time.
Generation of videos depicting well-known public figures is generally blocked, with the exception of those who have registered themselves in the characters system.
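OpenAI has not described how consent is represented internally. Purely as an illustration, here is a minimal Python sketch of a revocable likeness-consent check that is consistent with the policy described above; every name in it (LikenessGrant, may_generate, and so on) is hypothetical:

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LikenessGrant:
    """A revocable permission to use one person's face or voice."""
    subject_id: str                    # the person depicted
    granted_by: str                    # who attested consent: the subject, or an uploader
    scope: str                         # e.g. "face" or "voice"
    revoked_at: datetime | None = None

    def is_active(self) -> bool:
        return self.revoked_at is None

    def revoke(self) -> None:
        # Revocation is recorded, not deleted, so past decisions stay auditable.
        self.revoked_at = datetime.now(timezone.utc)

def may_generate(depicted: list[str],
                 grants: dict[str, list[LikenessGrant]],
                 public_figures: set[str]) -> bool:
    """Allow generation only if every depicted person has an active grant.

    Public figures need a grant they registered themselves (the "characters"
    path); for everyone else, an uploader-attested grant is accepted.
    """
    for subject in depicted:
        active = [g for g in grants.get(subject, []) if g.is_active()]
        if subject in public_figures:
            if not any(g.granted_by == subject for g in active):
                return False
        elif not active:
            return False
    return True
```

The key property such a design would deliver is the one the characters feature promises: revoking a grant immediately fails all future checks without rewriting the historical record.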
Despite this, security experts believe that the face upload function still represents a significant risk, especially for minors and individuals who are unaware that images of them may be used.

Measures for young users
The Sora platform includes specific protections aimed at teenagers. Adult profiles are not recommended to teen users, adults cannot initiate direct messages with minors, and teen accounts have default limits on continuous scrolling in the feed. Parents can manage these settings via ChatGPT's parental controls.
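As a rough illustration of how such rules compose, consider the Python sketch below. The function names and the scroll cap are invented for the example; OpenAI has not published these internals:

```python
from dataclasses import dataclass

TEEN_MAX_FEED_ITEMS = 100  # illustrative default cap; OpenAI has not published a number

@dataclass
class User:
    user_id: str
    is_minor: bool

def may_recommend_profile(viewer: User, profile: User) -> bool:
    """Adult profiles are not surfaced to teen users in recommendations."""
    return not (viewer.is_minor and not profile.is_minor)

def may_initiate_dm(sender: User, recipient: User) -> bool:
    """An adult account cannot start a direct-message thread with a minor."""
    return not (not sender.is_minor and recipient.is_minor)

def feed_page_allowed(viewer: User, items_served_today: int) -> bool:
    """Teen accounts hit a default cap on continuous scrolling; adults do not."""
    if viewer.is_minor:
        return items_served_today < TEEN_MAX_FEED_ITEMS
    return True
```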
How competitors stand
OpenAI's focus on security does not exist in a vacuum. Competitors such as RunwayML, Pika Labs, and Google's Veo have safeguards of their own, but with significant differences.
RunwayML is highlighted as relatively robust on data security, with encryption both in transit and at rest, as well as compliance with GDPR and US privacy regulations. The company works with NCMEC (the National Center for Missing & Exploited Children) on reporting child sexual abuse material (CSAM). A clear weakness, however, is the absence of parental controls, which makes the platform best suited to adults.
Pika Labs, by contrast, has been criticized for vague privacy policies and weaker moderation than its competitors. Its feature for generating dramatic effects with real people's likenesses, such as "exploding" or "melting" a person, has drawn particular criticism over the potential for misuse.
Experts' assessment
Research and security communities point out that even the most thoroughly developed security systems have limitations when it comes to video generation. Realistic AI video makes it especially difficult for children and young people to distinguish between real and generated content, creating new challenges related to identity misuse and consent.
Taken together, the source material from the OpenAI blog and independent analyses of competitors paint a picture of an industry moving in the right direction, but one in which significant regulatory and technical gaps remain.
OpenAI's presentation of Sora 2 is taken from the company's own blog and should be read accordingly: it is the company's self-presentation, not an independent security audit.
