At the end of September 2025, OpenAI released Sora 2, the latest version of its video generation platform, along with a companion Sora 2 app. OpenAI touted the platform as being “more physically accurate, realistic, and more controllable than prior systems.” The Sora 2 landing page featured numerous videos generated by the platform in response to user prompts.
Perhaps in anticipation of a sharp response by the media and entertainment industry, OpenAI CEO Sam Altman released a statement on October 3. “We have been learning quickly from how people are using Sora and taking feedback from users, rightsholders, and other interested groups…. First, we will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls.”
“Second,” Altman said, “We are going to have to somehow make money for video generation. … We are going to try sharing some of this revenue with rightsholders who want their characters generated by users.”
Altman also commented on engagement from users in Japan (as it happens, manga is one of the most pirated categories of creative content).
Media industry responds
The Motion Picture Association responded quickly: “Since Sora 2’s release, videos that infringe our members’ films, shows, and characters have proliferated on OpenAI’s service and across social media. While OpenAI clarified it will ‘soon’ offer rightsholders more control over character generation, they must acknowledge it remains their responsibility – not rightsholders’ – to prevent infringement on the Sora 2 service. OpenAI needs to take immediate and decisive action to address this issue. Well-established copyright law safeguards the rights of creators and applies here.”
Rightsholders are responding, too. For example, in mid-October, Variety reported that CMG Worldwide, an intellectual property firm, had contracted with Loti AI, a deepfake detection platform provider, to “safeguard” the likenesses of late entertainment celebrities. Variety positioned the deal as “a proactive move in the digital rights protection conversation as AI tools like Sora 2…make it increasingly easy to develop digital replicas of people who can no longer consent to the use of their likeness.”
What it means for trust
Sora 2 and other AI video generation platforms have become increasingly refined. Their first versions produced obvious errors, such as physically impossible motion trajectories and humans with extra limbs. The latest versions have steadily eliminated these flaws.
OpenAI claims that “Sora 2 can do things that are exceptionally difficult—and in some instances outright impossible—for prior video generation models: Olympic gymnastics routines, backflips on a paddleboard that accurately model the dynamics of buoyancy and rigidity, and triple axels while a cat holds on for dear life.
“Sora 2 is … better about obeying the laws of physics compared to prior systems,” said OpenAI.
But if Sora 2 and its counterparts from other suppliers are so good at what they do, what could their generated content do to the reputations of public or political figures? Or to your own likeness?

There is already great concern over deepfakes and their threat to human agency (the rights and control that people have over their own identities and actions), and technology companies are producing countermeasures. Nobody wants an AI-generated likeness of themselves committing crimes they would never contemplate.
Why it matters
Being no stranger to copyright enforcement lawsuits and surely aware of Hollywood’s recent legal initiatives against other AI platforms, OpenAI is paying attention and likely considers itself warned.
The media industry’s response to OpenAI has been assertive but somewhat more guarded than its recent responses to the rights-infringing features of several other generative AI platforms.
In June, The Walt Disney Company and Universal sued Midjourney in federal court over its ability to generate videos and images from copyrighted but unlicensed content. In September, Warner Bros. Discovery filed its own lawsuit against Midjourney in the same court, which, in the interest of efficient use of resources, deemed the two cases related.
Also in September, business units of Disney, Universal, and Warner Bros. Discovery sued MiniMax, a privately held Chinese company whose Hailuo AI platform is pitched as “Hollywood in your pocket,” along with its parent company and one of its distributors.
It remains to be seen how closely OpenAI will work with the media industry to enforce legal use. But don’t expect pressure from creators and rightsholders to let up.
Further reading
Sora 2 is here. Product landing page. September 30, 2025. OpenAI
MPA issues statement on OpenAI’s recent release of Sora 2. Press release. October 6, 2025. by the Motion Picture Association (MPA)
CMG Worldwide teams with Loti AI to protect IP for estates of Burt Reynolds, Jimmy Stewart, Judy Garland and more. Article. October 14, 2025. by Jennifer Maas. Variety
MiniMax: Studios sue China-based “Hollywood in your pocket” video AI platform for copyright infringement. Article. September 16, 2025. by Steven Hawley. Piracy Monitor
Warner Bros. copyright suit against Midjourney is related to Disney/Universal suit, says Court. Article. September 8, 2025. by Steven Hawley. Piracy Monitor
Warner Bros. Discovery sues Midjourney AI platform provider for massive copyright infringement. Article. September 5, 2025. by Steven Hawley. Piracy Monitor
‘Piracy is Piracy’ – Studios sue Midjourney, whose AI platform derives images from copyrighted works. Article. June 12, 2025. by Steven Hawley. Piracy Monitor