Disney’s AI Deal with OpenAI May Force Google and Elon Musk to Take Their Legal Case to the Supreme Court

December 12, 2025 · W. D. W. Pro
Bob Iger and the OpenAI logo - Photo Credit: CNBC Television YouTube; OpenAI

The Walt Disney Company and OpenAI have entered into a licensing partnership that allows generative artificial intelligence tools to use Disney characters. The legalities of such a deal are likely to impact the future of entertainment forever.

U.S. copyright and trademark laws were written long before generative AI, but they already supply most of the legal framework now being stress-tested by text-to-image and text-to-video systems. Copyright protects original expression fixed in a tangible medium (films, scripts, character art, music, etc.) and gives the rightsholder exclusive rights to reproduce, distribute, publicly perform/display, and, critically for AI, prepare derivative works based on the original.

Anna, Elsa, and Olaf in A Frozen Holiday Wish – Disney+

When generative systems ingest large volumes of copyrighted works during training, the central unresolved questions are whether that copying is excused by defenses such as fair use, and whether particular outputs are “substantially similar” to the originals or otherwise function as unlicensed derivative works.

A parallel question arises on the output side: even if a model’s training is found lawful, outputs that recreate protectable expression can still trigger infringement claims under traditional “copying + substantial similarity” theories. Layered on top is the issue of whether AI outputs themselves can be copyrighted.

The U.S. Copyright Office has been explicit that copyright requires human authorship, and that purely AI-generated material is not registrable; protection may exist only for the human-authored contributions, such as selection, arrangement, or meaningful modification, while the AI-generated portions must be disclaimed.

Nick Wilde and Judy Hopps in Zootopia 2 – YouTube, Disney

Trademark law, governed principally by the Lanham Act, addresses a different harm: consumer confusion about source, sponsorship, or affiliation. To prove infringement, a plaintiff generally must show it owns a valid mark and that the defendant’s use of a similar mark in commerce is likely to cause confusion. In generative AI, trademarks get implicated when outputs place famous brands or characters into commercial contexts (ads, product packaging, “official-looking” promos), or when platforms market prompts, templates, or model features in ways that suggest endorsement.

Even where confusion is hard to prove, famous marks can invoke dilution theories (blurring/tarnishment) and false endorsement claims, especially when content appears to trade on brand goodwill. As a practical matter, trademark exposure tends to grow when AI outputs are distributed at scale, monetized, or presented in a way that looks “official”… and Disney’s marks are among the strongest and most aggressively policed in the world.

Against that legal backdrop, the newly announced Disney–OpenAI partnership is best understood as a licensing and control strategy rather than a change in the law itself. Disney and OpenAI disclosed a three-year arrangement tied to a $1 billion Disney investment that allows Sora to generate short, user-prompted videos using a defined set of more than 200 Disney-related characters and associated elements (costumes, props, vehicles, environments), and the deal expressly excludes the use of real talent likenesses or voices.

Darth Vader outside Star Tours – YouTube, Global Current

Reporting also indicates the agreement is structured to permit character use while restricting the use of Disney IP for training: an important distinction because it draws a bright line between (a) licensing for outputs within controlled creative rails and (b) licensing or permission for training ingestion, which is at the center of many lawsuits.

In effect, Disney appears to be betting that the cleanest way to participate in generative content is to authorize a bounded, brand-safe “sandbox” for fans while preserving the ability to sue—or credibly threaten to sue—when its works are used to train models without permission or when outputs cross into confusion, dilution, or infringement.

That dual-track approach is underscored by Disney’s cease-and-desist letter to Google, reportedly delivered the same day as the OpenAI announcement. Reuters, citing CNBC, reported on the letter, and additional coverage has described Disney’s allegation that Google used Disney content without authorization in connection with its AI systems.

The robot H.E.R.B.I.E. at Disneyland – Disney Parks Blog

While the full letter is not publicly reproduced in the most authoritative reporting, the posture is consistent with Disney’s broader strategy: partner where it can set terms and guardrails, and confront parties it believes are training or deploying generative tools using Disney works without permission.

The legal significance is less about creating new rights and more about positioning—Disney can argue that lawful generative use is feasible through licensing and safeguards, which may matter rhetorically (and sometimes evidentially) in litigation where AI companies lean heavily on “industry inevitability” arguments.

Midjourney’s litigation problems illustrate why this matters.

In June 2025, Disney, NBCUniversal, and DreamWorks filed a major copyright infringement lawsuit against Midjourney in federal court, alleging that Midjourney’s systems enable the creation of images incorporating the plaintiffs’ protected characters and that the company has engaged in direct and secondary infringement tied to both training and output generation.

Moana in the live action movie – YouTube, Disney

Separate reporting and commentary throughout 2025 have tracked the case as part of a broader wave of “training data” disputes, in which rightsholders argue that mass ingestion of copyrighted works is not excused by fair use and that outputs that recreate characters and scenes are actionable.

Midjourney has also faced additional pressure from other major rightsholders; for example, public reporting in 2025 describes other studio complaints alleging systematic infringement. These cases matter because they put courts on the path toward answering questions that Congress has not yet resolved: what constitutes actionable “copying” in training, whether model weights can be treated as infringing derivatives, what level of similarity in outputs crosses the line, and how secondary liability should apply when platforms provide tools that predictably generate protected characters.

The Disney–OpenAI deal could influence how these disputes evolve in three practical ways. First, it strengthens the “licensing market” argument: if a major rightsholder is demonstrably willing to license characters for generative outputs under negotiated constraints, plaintiffs may argue that unlicensed model developers are harming an actual or emerging market for authorized AI uses, a factor that can weigh against fair use in some contexts. Second, it raises the standard for “reasonable safeguards.”

Loki at the end of Loki Season 2 – Disney+

Disney and OpenAI publicly emphasize controlled character sets and exclusions (such as talent likeness/voice), which may become an implicit benchmark when regulators, courts, or juries evaluate whether other platforms took adequate steps to prevent infringement and consumer confusion. Third, it may accelerate a bifurcation in the industry: licensed, brand-safe generative experiences (where trademark and copyright permissions are explicit) on one side, and open-ended model development and distribution (where training provenance and output policing are contested) on the other—meaning the hardest fights shift toward training-data legality and platform liability rather than whether generative content can exist at all.

None of this eliminates legal uncertainty.

Courts are still working through the foundational questions in generative AI litigation, and the Copyright Office’s position that purely AI-generated material is not copyrightable adds a separate wrinkle: even where a user makes a Disney-themed AI clip inside an authorized tool, the user may have limited—or no—copyright in the output absent meaningful human authorship, while Disney’s underlying copyrights and trademarks remain enforceable.

Elon Musk – Real Time with Bill Maher, YouTube

In short, the partnership does not rewrite copyright or trademark law, but it does alter the battleground by demonstrating a high-profile licensing model… and by sharpening the contrast between “authorized generative play” and the unlicensed training and output ecosystems that Disney is simultaneously challenging through cease-and-desist tactics and active litigation.

So who is ready for a battle all the way up to the Supreme Court? If Disney tries to enforce exclusivity with OpenAI, Google and xAI (Elon Musk’s Grok) may be prepared to fight all the way to the highest court in the land.

How do you feel about Disney and OpenAI? Sound off in the comments and let us know your thoughts! 

Author: W. D. W. Pro
Founder, Publisher, CEO

WDW Pro is an opinionated commentator on all things Disney and entertainment. He runs one of the most-viewed pop culture news channels on YouTube, with many millions of views every month. First becoming well-known on WDWMagic.com, the author was brought on to work at Pirates and Princesses. Pro has previously released exclusive details on a variety of rumors and leaks before they were made public. Exclusives have included breaking info on new Epcot attractions, detailing the lightsaber experience at the Star Wars hotel, reporting the severity of a Harrison Ford injury before anyone else, and revealing that Hugh Jackman was coming to the MCU and that Storm would be linked with Wakanda. WDW Pro has written articles viewed by millions of readers while maintaining an 87% accuracy rating for revealing "insider" information in 2020. In 2021, the author had better than 90% accuracy on reported leaks and rumors. Pro joined That Park Place on June 22nd, 2021. The author's accolades include being featured on The Daily Wire, cited by Timcast, numerous references by YouTube personalities, as well as having material tweeted by Dr. Jordan Peterson. WDW Pro is honored, and grateful, while hoping to make the world a better place. In 2023, a third-party audit found Pro's accuracy for rumors and scoops to be 92.5%.

SOCIAL MEDIA:
X: http://x.com/wdwpro1
YouTube: https://www.youtube.com/@WDW_Pro
EMAIL: wdwpro@thatparkplace.com
Vallor

I go back and forth on AI. I know it will replace me in my job one day, but I use it pretty heavily in my day-to-day work, in ways that are novel to me, my company, and my industry, and it is an amazing force multiplier. That’s why I struggle when it comes to figuring out how AI works in a more standard creative industry.

Is a work AI-generated when the prompts used to create the piece were completely novel? If I feed the AI a scenario (say, to create a picture) and ask for feedback, and it suggests cel-shading the work, which I ask it to do, is that still unprotected because there was some amount of AI modification? What if all I provide are prompts and corrections? How much, and what type of, contribution before it is an AI creation?

If a movie director explains the look he wants to a concept artist by saying, “The bad guys have polygonal shapes with lots of black and white sterile materials for the surfaces, sort of like the Empire in Star Wars,” the output is protected. But if I type the same thing to an AI agent and refine the image, does it suddenly lose protection?

If I go to a real-life illustrator and ask that person to create a comic strip using the same sort of art style as Calvin and Hobbes, but with totally different characters, scenes, side characters, and storylines, am I or that artist implicated as an infringer of Bill Watterson’s Calvin and Hobbes?

Or if I write a story that is an homage to The Lord of the Rings, at what point do I infringe? Dennis L. McKiernan’s Iron Tower trilogy is almost beat for beat the same as LotR, but did the Tolkien estate sue him for infringement? It’s good in its own right, but even the author admits that it is almost-but-not-quite LotR.

And if I use AI to fill in story gaps or make corrections to a manuscript I created, does that mean it is now an “AI work” and not protected?

These are all questions the 70- or 80-year-old judges and heads of departments are not prepared for or capable of tackling. Can you imagine trying to explain AI-enhanced creation and why it should be protected? So they’ve been kicking the can down the road for years.