OpenAI’s Sora 2 video generator has unleashed a wave of legal controversies and creative disruption since its launch two weeks ago, with users creating unauthorized deepfakes of celebrities and copyrighted characters. The tool’s ability to generate realistic videos from simple text prompts is forcing a reckoning over copyright law, creative ownership, and the distinction between human artistry and AI-generated content.
The big picture: Sora 2 represents a pivotal moment where AI video generation capabilities have outpaced existing legal frameworks and ethical guardrails.
- The app hit more than a million downloads within five days and topped Apple’s App Store charts before OpenAI implemented content restrictions.
- Users immediately began creating problematic content, including unauthorized videos featuring SpongeBob, Ronald McDonald, and even OpenAI CEO Sam Altman endorsing products he never actually endorsed.
- Legal scholar Sean O’Brien from Yale Privacy Lab warns that a “four-part doctrine” is emerging: only human-created works are copyrightable, AI outputs are “Public Domain by default,” humans bear responsibility for AI-generated infringement, and training on copyrighted data without permission is legally actionable.
Key legal challenges: Major entertainment companies and celebrities are pushing back against unauthorized use of their intellectual property and likenesses.
- The Motion Picture Association, which represents major Hollywood studios, issued a stern statement on October 6, with CEO Charles Rivkin declaring: “Videos that infringe our members’ films, shows, and characters have proliferated on OpenAI’s service and across social media.”
- Actor Bryan Cranston and SAG-AFTRA, the actors’ union, filed complaints after users created unauthorized videos using his likeness.
- OpenAI initially contacted Hollywood rights holders in September offering opt-out options, but industry leaders demanded the company take responsibility for preventing infringement rather than placing the burden on rights holders.
How the technology works: Sora 2 generates videos from simple text prompts, and that same simplicity makes misuse easy.
- Users can remix existing videos by providing new prompts, as demonstrated when the author created a fake endorsement video of Sam Altman in minutes.
- The system includes a “cameo” feature for users to upload their own likeness while supposedly blocking public figures.
- OpenAI has implemented some guardrails, rejecting prompts for Star Wars characters and other copyrighted content, but enforcement appears inconsistent.
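Prompt-level guardrails of the kind described above are often simple pattern filters, which helps explain the inconsistent enforcement: exact names get caught while paraphrases slip through. A minimal sketch of the idea — the blocklist contents and matching logic here are illustrative assumptions, not OpenAI’s actual filter:

```python
import re

# Illustrative blocklist only; OpenAI's real filter and its contents are not public.
BLOCKED_TERMS = ["darth vader", "star wars", "spongebob", "ronald mcdonald"]

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing an exact blocked term (case-insensitive)."""
    text = prompt.lower()
    return not any(
        re.search(rf"\b{re.escape(term)}\b", text) for term in BLOCKED_TERMS
    )

print(is_prompt_allowed("SpongeBob flipping burgers"))      # False: exact match
print(is_prompt_allowed("a yellow sea sponge in square pants"))  # True: paraphrase evades the filter
```

Because filters like this match surface strings rather than meaning, paraphrased prompts can still yield recognizable characters — consistent with the inconsistent enforcement users report.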
What they’re saying: Industry experts are divided on whether AI video tools represent creative democratization or artistic theft.
- Veteran commercial illustrator Bert Monroy expressed concern about AI’s impact: “Now, with AI, the client has to think of what they want and write a prompt and the computer will produce a variety of versions in minutes with NO cost except for the electricity to run the computer.”
- Maly Ly, CEO of AI startup Wondr, offers a more optimistic view: “AI video is forcing us to confront an old question with new stakes: Who owns the output when the inputs are everything we’ve ever made? We’re not seeing creativity stolen; we’re seeing it multiply.”
- Attorney Richard Santalesa warns that OpenAI’s deep pockets make it a prime litigation target: “Copyright grants the owner various exclusive rights under US copyright law, including the creation of derivative works.”
Safety measures implemented: OpenAI has outlined five main safety themes in response to criticism.
- Consent-based likeness controls: the “cameo” feature lets users upload their own likeness, while public figures are blocked.
- Intellectual property and audio safeguards with takedown request processes.
- Provenance initiatives including moving watermarks and C2PA metadata for content verification.
- Usage policies prohibiting privacy violations, fraud, harassment, and threats.
- Recourse mechanisms allowing users to report abuse for content removal.
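The provenance initiatives above rest on a general technique: binding signed metadata to the exact content bytes so that any edit is detectable. As a minimal sketch of that idea — this is not the actual C2PA format, which uses X.509 certificate chains and embedded manifests; the key and manifest fields here are illustrative assumptions:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical key; real C2PA signing uses certificates

def attach_provenance(video_bytes: bytes, generator: str) -> dict:
    """Build a manifest binding metadata to a hash of the content bytes, then sign it."""
    manifest = {
        "generator": generator,
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(video_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; editing the video or manifest breaks verification."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(video_bytes).hexdigest() != claimed.get("sha256"):
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

video = b"fake mp4 bytes"
manifest = attach_provenance(video, "sora-2")
print(verify_provenance(video, manifest))         # True: untouched content verifies
print(verify_provenance(video + b"x", manifest))  # False: any byte change is detected
```

The design choice this illustrates is why provenance complements watermarks: a watermark can be cropped or re-encoded away, but a signed hash fails verification the moment the bytes change.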
The deepfake dilemma: Sora 2 intensifies existing challenges around distinguishing authentic content from AI-generated material.
- The technology enables creation of convincing deepfakes that can damage reputations or spread misinformation.
- Robin Williams’ daughter Zelda Williams publicly pleaded: “Stop sending me AI videos of dad… To watch the legacies of real people be condensed down to … horrible, TikTok slop puppeteering them is maddening.”
- Historical context shows photo manipulation predates AI, with examples including fabricated 1864 Civil War photos and Stalin having enemies airbrushed from official images.
What’s next: The legal and creative industries face an uncertain future as AI video generation becomes mainstream.
- Yale’s O’Brien predicts increased litigation as the legal framework crystallizes around AI-generated content liability.
- OpenAI maintains its tools are “designed to support human creativity, not replace it,” but critics question whether current safeguards are sufficient.
- The technology’s rapid adoption suggests that regardless of legal challenges, AI video generation will continue evolving and spreading throughout creative industries.