In 2025, entertainment, advertising, and games increasingly rely on AI to recreate the voices, faces, and styles of artists who have died. Yet the legal risks of using AI to resurrect a deceased creator's likeness are more complex than most teams expect, spanning rights of publicity, copyright, contracts, and consumer protection. One misstep can trigger injunctions, reputational harm, and costly settlements. So what should you check before you press "generate"?
Postmortem publicity rights and likeness law
The first legal question is often simple: who controls the deceased creator’s identity? Many jurisdictions recognize a “right of publicity” (or personality right) that can extend after death, allowing heirs or an estate to control commercial uses of a person’s name, image, voice, signature, or other identifying traits. For AI recreations, the risk rises because outputs can be hyper-realistic and used at scale.
Key risk patterns include:
- Unlicensed commercial use: Using a recreated voice in an ad, or a digital double in a game, can be treated as a commercial exploitation of identity.
- Overbroad “tribute” assumptions: A project framed as a tribute can still be commercial if it sells tickets, subscriptions, or promotes a brand.
- Jurisdiction shopping by claimants: Estates may sue where rights are strongest or where the work is distributed, especially if your content targets audiences in multiple regions.
Practical takeaway: identify which law applies where you distribute, determine whether postmortem rights exist and for how long, and then map who has authority to license. When you cannot confirm rights ownership, treat the project as high-risk until proven otherwise.
Likely follow-up: What if the person is not famous? Publicity claims can still arise if the person is identifiable and the use is commercial. Fame increases damages risk, but identifiability and commercial intent are usually the core issues.
Copyright, voice cloning, and derivative work pitfalls
AI-resurrected performances can trigger copyright issues even when a likeness license exists. Copyright focuses on protected expression: recordings, audiovisual works, scripts, choreography, and some stylized artistic elements. Publicity rights focus on identity. You often need both.
Common copyright traps for posthumous AI projects include:
- Training data contamination: If a model is trained on copyrighted recordings, films, or images without proper rights, rights holders may argue infringement or seek remedies tied to the use of their works in model development and outputs.
- Sound-alike reconstruction: Even if a voice is not protected as "identity" in a given jurisdiction, a recreated performance may be substantially similar to a copyrighted sound recording or a protected musical composition or arrangement.
- Style vs. content confusion: “Style” is often not protected, but specific protected elements are. A model that reproduces recognizable phrases, melodies, scenes, or compositions can cross the line quickly.
To reduce exposure, separate the legal analysis into: (1) rights to input materials (training, fine-tuning, reference clips), (2) rights to outputs (the final audio/video/image), and (3) rights to the performer identity. This helps teams avoid a false sense of security from a single license.
Likely follow-up: If we use “clean-room” voice cloning with new scripts, are we safe? You may lower copyright risk, but publicity/likeness risk can remain. You also may face false endorsement and consumer protection issues if marketing implies estate approval.
Estate licensing agreements and chain-of-title risks
Even where licensing is possible, estates are not always a single decision-maker with clear authority. Chain-of-title problems can surface late, after a campaign is live. In 2025, sophisticated estates and rights administrators expect documentation that matches industry standards used for music and film clearances.
High-impact chain-of-title risks include:
- Competing heirs or representatives: Multiple parties may claim authority to license the likeness, voice, or archival materials.
- Prior exclusive deals: A prior contract may grant exclusive rights for digital uses, virtual performances, or merchandising.
- Scope mismatch: A license for “promotional use” may not cover synthetic performance, new dialogue, or interactive experiences.
- Moral rights and approval clauses: Some contracts require approval for edits, context, or associations—even if a general license exists.
What “good” looks like: a signed license that clearly defines (a) permitted uses (ads, film, game, social), (b) territories and platforms, (c) term, (d) whether new lines, new songs, or new scenes are permitted, (e) whether the AI model itself can be retained and reused, and (f) approval rights over scripts, storyboards, or prompts. Include explicit language for synthetic voice, digital doubles, and generative outputs rather than relying on legacy “name and likeness” boilerplate.
Likely follow-up: Should we license the model or the output? Ideally both. The output license addresses distribution; the model license controls future reuse, security obligations, and whether the estate can audit or require deletion.
Defamation, false endorsement, and consumer protection
AI resurrection can create statements the person never made or associations they never supported. That can trigger defamation or “false light” claims (where recognized), plus false endorsement and unfair competition claims if consumers are misled. Regulators and platforms also scrutinize deepfakes more aggressively, particularly in advertising and political contexts.
Risk spikes when:
- The recreated creator “endorses” a product that conflicts with their known values or past contracts.
- The content implies official authorization (“approved by the estate,” “official comeback”) without accurate substantiation.
- Edits change meaning: Even if you use archival audio, selective editing combined with AI filler can create a misleading composite.
Mitigations that hold up better than vague disclaimers:
- Clear, prominent disclosures that the performance is synthetic and that the estate approved it (only if true).
- Script and prompt governance that prohibits political messaging, medical claims, and endorsements unless specifically licensed and reviewed (a sketch of such a gate follows this list).
- Context controls: restrict where assets can appear (no user-generated remixing, no open-ended chat outputs using the persona) unless you can moderate effectively.
Likely follow-up: Is a disclaimer enough? A disclaimer helps but rarely cures a misleading overall impression. If the main creative choices imply endorsement, you still face exposure, especially in advertising.
Privacy, biometric data, and deepfake compliance
Resurrecting a likeness often involves processing biometric identifiers: face geometry, voiceprints, motion capture, and other measurements. Depending on where you operate, this can trigger biometric privacy rules, consent requirements, and data security obligations. Even if the person is deceased, data about living people (stand-ins, relatives, archival interviewers, background performers) can be implicated through collection, labeling, and model training practices.
Compliance issues you should expect in 2025 include:
- Notice and consent for scanning, motion capture, or voice capture used to build a digital double.
- Data minimization: collecting more biometric data than needed increases breach and enforcement risk.
- Retention and deletion: keeping a reusable “creator model” indefinitely is hard to justify without explicit contractual permission and robust controls.
- Security: a leaked voice model can enable fraud or impersonation, creating downstream liability and reputational damage.
Best practice: treat voice and face models as sensitive assets. Apply restricted access, watermarking where feasible, logging, incident response procedures, and contractual limits on subcontractors. Also verify platform policies; some distribution platforms require labeling or prohibit certain synthetic media uses without proof of rights.
Likely follow-up: What about “publicly available” footage? Public availability does not automatically grant rights to extract biometric identifiers or use the identity commercially. You still need a legal basis and a rights plan.
Risk management: clearance workflows, disclosures, and insurance
Teams move faster when they implement a repeatable clearance workflow rather than improvising on each project. A strong workflow also supports Google’s EEAT expectations for credibility: you can show your process, document decisions, and reduce harmful outcomes.
A practical clearance and governance checklist (a sketch of a matching release gate follows the list):
- Rights map: identify publicity/likeness rights holder, copyright owners of source materials, and any guild/union or contractual constraints tied to past works.
- Use-case definition: specify whether the AI output is a cameo, a lead performance, interactive chat, or user-remixable content—each has different risk.
- Model hygiene: document training sources, ensure licenses cover training and outputs, and keep audit trails for datasets and prompts.
- Approval gates: require estate review of scripts/storyboards and final renders when the likeness is central to the piece.
- Disclosure standards: adopt consistent labels for synthetic media, and align marketing copy with actual permissions.
- Insurance and indemnities: evaluate media liability/E&O coverage for deepfake and AI claims; negotiate clear indemnity boundaries with vendors and agencies.
Vendor management matters. If you outsource model creation, your contract should address: ownership of the model and weights, confidentiality, security controls, reuse prohibitions, deletion at end of term, and warranties about non-infringing training data. Without these clauses, you may inherit the vendor’s weak practices and still be the main target in litigation.
Likely follow-up: Does “fair use” solve this? Fair use (or similar defenses) can be fact-specific and uncertain. It is a litigation defense, not a permission slip—especially risky for advertising or high-profile entertainment releases. Treat it as a last-resort argument, not a business plan.
FAQs
Can we use an AI-generated version of a deceased creator if we never mention their name?
If the creator is still identifiable by voice, face, or distinctive traits, you can face publicity, false endorsement, or unfair competition claims. Avoiding the name reduces risk but does not eliminate it.
Who owns the rights to a dead celebrity’s voice or face?
It depends on the jurisdiction and the estate plan. Rights may belong to an estate, heirs, or an assigned rights company. You must confirm chain-of-title and whether postmortem publicity rights exist where you distribute.
Do we need permission to train an AI model on old interviews or films?
Often yes. Training may implicate copyrights in the footage and recordings, and it can raise biometric/privacy issues. Secure licenses that expressly cover AI training and resulting outputs.
Is a “tribute” concert or documentary safer than an advertisement?
Generally, ads carry higher endorsement and consumer protection risk. But paid ticketed events and monetized documentaries are still commercial and can trigger publicity and copyright claims without proper permissions.
What should a posthumous likeness license include for AI projects?
At minimum: defined synthetic uses, territories, platforms, term, approval rights, whether new dialogue/performance is allowed, whether the model can be reused, disclosure requirements, security controls, and deletion/retention rules.
Can we rely on platform deepfake labels instead of our own disclosures?
No. Platform labels help, but you remain responsible for accurate marketing and compliance. Use your own clear disclosures and ensure they match the actual permissions and creative context.
AI can extend a creator’s legacy, but it can also create legal exposure that spreads across multiple rightsholders and jurisdictions. The safest approach in 2025 is to treat posthumous digital doubles like major talent: clear publicity rights, clear copyrights, document chain-of-title, and build strong disclosure and security controls. If you cannot prove permission, don’t publish—fix rights first.
