Understanding FTC disclosure requirements for AI-generated likenesses is now a practical necessity for brands, creators, and agencies using synthetic faces, voices, or cloned personas in ads. In 2025, consumers expect clarity, and regulators expect proof. This guide explains what the FTC looks for, how to disclose effectively across formats, and how to document your decisions before a campaign goes live. Ready to reduce risk?
FTC disclosure rules for AI likenesses: what the agency expects
The Federal Trade Commission enforces truth-in-advertising standards under Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices. When you use an AI-generated likeness—such as a synthetic spokesperson, a voice clone, or a digitally created “endorser”—the central question is simple: could the ad mislead a reasonable consumer about who is speaking, what is real, or why a claim is being made?
From a compliance perspective, AI doesn’t create a new category of advertising law. It changes how easily consumers can be misled and how quickly content spreads. That makes clear, timely disclosures more important, not less.
In practice, the FTC will focus on:
- Identity transparency: whether viewers might believe a real person participated when they did not, or whether a real person’s likeness was simulated.
- Endorsement authenticity: whether an endorsement implies real experience or opinions that were never formed by a real endorser.
- Material connections: whether the relationship between the advertiser and the “endorser” (real or synthetic) is clearly disclosed.
- Substantiation: whether objective claims (performance, savings, “clinically proven,” typical results) are supported by evidence regardless of who delivers the message.
If your creative could cause a consumer to think “a real person said this,” “a real expert reviewed this,” or “this is an authentic testimonial,” you should assume a disclosure is required and design it to be noticed.
AI-generated spokesperson disclosure: when you must clearly label synthetic media
A disclosure is most critical when an AI-generated likeness changes the meaning consumers take from the content. Labeling becomes essential in common situations such as:
- Synthetic celebrity or public figure lookalikes/soundalikes: even if you avoid a name, the implication can still be misleading.
- Digital “founder,” “doctor,” or “customer” testimonials: consumers may assume these are real people describing real experiences.
- Voice cloning for an executive, creator, or influencer: audiences may believe the person actually recorded the message.
- “Man on the street” or interview-style formats: realism cues increase deception risk.
Ask two follow-up questions that often determine whether your disclosure is adequate:
- Would the audience reasonably care? If “this person is real” affects trust, credibility, or purchase decisions, it’s material.
- Is the synthetic nature obvious without explanation? If not obvious, you need clear labeling. If it’s obvious, you may still need labeling when the content implies an endorsement, expertise, or typical results.
What should the disclosure say? Use plain language that matches the risk:
- Low ambiguity: “AI-generated spokesperson.”
- Voice cloning: “Voice is AI-generated.” or “AI voice recreation.”
- Simulated real person: “This is a digital recreation.”
- Fabricated testimonial: “Dramatization. Not a real customer.” or “Composite depiction; not an actual testimonial.”
Be careful with vague labels like “enhanced” or “digitally edited.” If the key issue is that the person is not real (or didn’t say the words), say that directly.
Material connection disclosure for AI influencers: endorsements, sponsorships, and incentives
The FTC’s endorsement principles apply whether the endorser is a human influencer, a virtual influencer, or a brand-owned AI persona. If there is a material connection—payment, free product, affiliate commissions, ownership, employment, or other benefits—it must be disclosed clearly and close to the endorsement.
This matters for AI-generated likenesses because advertisers sometimes treat synthetic characters as “in-house creative.” Even then, consumers can still interpret statements as independent opinions. If the persona is brand-owned or the message is sponsored, make that relationship obvious.
Use disclosures that reflect the relationship:
- Sponsored content: “Ad,” “Sponsored,” or “Paid partnership.”
- Affiliate links: “I earn a commission from qualifying purchases.” (or equivalent, placed before the link or at the top of the post).
- Brand-owned persona: “Brand-created virtual character” or “This character is operated by [Brand].”
Two common mistakes to avoid:
- Hiding disclosures in a profile or “about” page: each endorsement needs its own clear disclosure where people see it.
- Using unclear shorthand: if the audience won’t understand it instantly, it’s not effective.
Also remember: even with a perfect sponsorship disclosure, you must still avoid misleading performance claims. A sponsored AI character cannot “testify” to experiences that never occurred in a way that implies they did.
Clear and conspicuous disclosures: placement, proximity, and format for AI content
The FTC evaluates whether a disclosure is clear and conspicuous. That means it must be difficult to miss and easy to understand on the device and platform where consumers encounter it. AI-generated likeness disclosures should follow the same usability logic as pricing, limitations, and endorsement disclosures.
Apply these practical rules across formats:
- Proximity: place the disclosure near the claim or the synthetic likeness, not separated by scrolls, clicks, or long captions.
- Prominence: use readable size, high contrast, and enough on-screen time for video.
- Plain language: “AI-generated” beats “synthetic media technology.”
- Repetition when needed: if the ad is long or the AI character appears multiple times, repeat the disclosure.
- Unavoidability: don’t place the disclosure behind “more” truncation, in a fast-scrolling ticker, or only in end cards.
Channel-by-channel examples that usually meet expectations:
- Short-form video: On-screen text “AI-generated spokesperson” within the first few seconds, plus a spoken disclosure if the format is audio-forward.
- Audio ads/podcasts: A spoken line early: “This message uses an AI-generated voice.” Avoid burying it at the end.
- Static social posts: Place “AI-generated image” at the beginning of the caption when the image could be mistaken for a real photo, and consider overlay text on the image for high-risk scenarios.
- Live streams: A pinned comment and periodic verbal reminders, especially when an AI avatar interacts in real time.
- Landing pages: Keep the disclosure near the hero media and any testimonial-like statements, not only in a footer.
Design your disclosure to answer the consumer’s immediate question: “Is this a real person, and did they actually say this?” If your disclosure doesn’t resolve that, revise it.
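For teams managing many placements, it can help to encode the channel guidance above as data so every variant is checked against the same rules. The following is a minimal Python sketch; the channel names, field names, and wording are illustrative assumptions, not FTC-prescribed values.

```python
# Hypothetical channel-to-disclosure mapping for a creative QA step.
# Channel names, field names, and wording are illustrative assumptions,
# not FTC-prescribed values.
from dataclasses import dataclass

@dataclass
class DisclosureSpec:
    text: str               # plain-language label shown to viewers
    placements: list[str]   # where the label must appear
    spoken: bool = False    # whether a spoken disclosure is also needed
    repeat: bool = False    # repeat for long content or recurring avatars

CHANNEL_DISCLOSURES = {
    "short_form_video": DisclosureSpec(
        text="AI-generated spokesperson",
        placements=["on-screen text within the first few seconds"],
        spoken=True,   # when the format is audio-forward
        repeat=True,
    ),
    "podcast_audio": DisclosureSpec(
        text="This message uses an AI-generated voice",
        placements=["spoken line near the start of the ad"],
        spoken=True,
    ),
    "static_social_post": DisclosureSpec(
        text="AI-generated image",
        placements=["start of the caption", "overlay text for high-risk creative"],
    ),
    "live_stream": DisclosureSpec(
        text="AI avatar operated by [Brand]",
        placements=["pinned comment", "periodic verbal reminders"],
        spoken=True,
        repeat=True,
    ),
    "landing_page": DisclosureSpec(
        text="AI-generated spokesperson",
        placements=["adjacent to hero media", "next to testimonial-style copy"],
    ),
}
```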
Deceptive advertising risk with deepfakes: claims, testimonials, and impersonation boundaries
AI-generated likenesses increase the risk of deception in three areas that routinely trigger enforcement attention: implied endorsements, testimonials and typicality, and expert/authority claims.
1) Implied endorsements and impersonation
If an ad implies that a real person endorsed a product (by showing a realistic likeness, mimicking a recognizable voice, or referencing a person’s brand identity), your disclosure must prevent confusion. In high-risk cases, a generic “AI-generated” label may not be enough; you may need to avoid the creative entirely, or state non-affiliation plainly (for example, “Not affiliated with [name]”) if the ad could be read as an association claim.
2) Testimonials and consumer experiences
“Testimonials” delivered by AI characters are risky when they sound like personal experiences (“This cleared my skin in a week”). If no real consumer had that experience, the ad can mislead even if you label the speaker as AI. A safer approach is to:
- Use AI characters for demonstrations of features, not personal outcomes.
- When discussing results, use substantiated, typical outcomes and avoid implying individual experience.
- If you present results narratives, label them clearly as dramatizations and ensure your performance claims still have evidence.
3) Expert claims
AI-generated doctors, scientists, or “reviewers” can mislead if they imply credentials or independent evaluation. If you use an AI expert character, avoid real-world credential cues (licenses, institutional branding) unless you can substantiate the underlying expert review by qualified professionals—and disclose that accurately.
One more boundary to remember: disclosures do not “cure” everything. If the core message is misleading—such as fabricated proof, fake reviews, or unsupported health claims—the FTC can still view the ad as deceptive.
Compliance checklist and documentation: practical steps for marketers and creators
Good compliance is repeatable. Build a workflow that makes disclosure decisions consistent and provable.
Pre-launch checklist (a minimal automation sketch follows the list)
- Identify the synthetic elements: face, voice, body, background, “before/after,” quotes, reviews.
- Assess consumer interpretation: could people think the likeness is a real person or a real statement?
- Map required disclosures: AI-generated label, sponsorship/material connection, dramatization, results qualifiers.
- Confirm claim substantiation: gather studies, test results, or other competent evidence for objective claims.
- Place disclosures in the creative: not just in campaign notes or a terms page.
- Test on mobile: verify readability, timing, truncation behavior, and audio clarity.
- Approve variants: each cutdown, language version, and placement can change what is “clear.”
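One way to make this checklist repeatable is a small pre-launch gate that every ad variant must pass before trafficking. The sketch below is an assumption-heavy illustration: the field names and rules mirror the checklist above but are hypothetical, not an official compliance tool.

```python
# Minimal pre-launch checklist gate, assuming a simple dict describes each ad
# variant. Field names (uses_synthetic_likeness, has_ai_label, etc.) are
# hypothetical; adapt them to your own asset-management system.

REQUIRED_WHEN_SYNTHETIC = [
    ("has_ai_label", "Add a clear 'AI-generated' disclosure in the creative."),
    ("label_in_creative", "Disclosure must live in the ad itself, not campaign notes."),
    ("mobile_checked", "Verify readability, timing, and truncation on mobile."),
]

REQUIRED_WHEN_SPONSORED = [
    ("has_material_connection_label", "Add 'Ad', 'Sponsored', or 'Paid partnership'."),
]

def prelaunch_issues(variant: dict) -> list[str]:
    """Return a list of blocking issues for one ad variant; empty means pass."""
    issues = []
    if variant.get("uses_synthetic_likeness"):
        for flag, message in REQUIRED_WHEN_SYNTHETIC:
            if not variant.get(flag):
                issues.append(message)
    if variant.get("is_sponsored"):
        for flag, message in REQUIRED_WHEN_SPONSORED:
            if not variant.get(flag):
                issues.append(message)
    if variant.get("makes_objective_claims") and not variant.get("substantiation_on_file"):
        issues.append("Gather evidence for objective claims before launch.")
    return issues

# Example: a sponsored short-form video with an AI spokesperson and no AI label.
print(prelaunch_issues({
    "uses_synthetic_likeness": True,
    "is_sponsored": True,
    "has_material_connection_label": True,
    "makes_objective_claims": False,
}))
```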
Documentation to keep (a simple record schema is sketched after this list)
- Creative source records: prompts, model/vendor used, and edit history showing what is synthetic.
- Consent and rights: releases for any real person whose likeness or voice was used or referenced.
- Substantiation file: the evidence supporting claims, plus reasoning for why claims are not misleading.
- Disclosure rationale: a brief memo explaining what you disclosed, where, and why it is clear and conspicuous.
- Influencer/agency instructions: written guidance, examples, and enforcement steps if disclosures are missed.
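Teams that keep these records in a shared system sometimes formalize them as a schema so nothing is lost at handoff. Below is a minimal sketch assuming a Python-based internal tool; every field name is an assumption, not a regulatory requirement.

```python
# Hypothetical record schema for the documentation file described above.
# Field names are illustrative; store whatever your legal team actually needs.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LikenessComplianceRecord:
    campaign: str
    synthetic_elements: list[str]          # e.g. ["voice", "face", "background"]
    model_or_vendor: str                   # tool used to generate the likeness
    prompts_and_edit_history: list[str]    # what was generated and how it was changed
    consent_documents: list[str]           # releases for any real person referenced
    substantiation_files: list[str]        # evidence behind objective claims
    disclosure_rationale: str              # why the disclosure is clear and conspicuous
    influencer_instructions: str | None = None
    created: date = field(default_factory=date.today)
```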
Operational follow-through
- Monitor comments and reposts for confusion and correct quickly when audiences misunderstand what is AI-generated.
- Audit evergreen ads periodically; platform UI changes can make once-clear disclosures less visible.
- Train creative and media teams together so disclosures are planned, not patched at the end.
If you want a simple decision rule: when realism increases, disclosure must become earlier, plainer, and more prominent.
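If you want to bake that rule into review tooling, it can be expressed as a mapping from a reviewer-assigned realism score to disclosure requirements. The sketch below is a toy illustration with assumed thresholds, not legal guidance.

```python
# Illustrative only: a toy encoding of the decision rule that more realism
# demands earlier, plainer, more prominent disclosure. Thresholds are assumptions.
def disclosure_requirements(realism: int) -> dict:
    """realism: rough 0-10 score your reviewers assign to the creative."""
    if realism >= 8:   # easily mistaken for a real person or real voice
        return {"placement": "first seconds, on-screen and spoken",
                "wording": "AI-generated spokesperson / AI-generated voice",
                "repeat": True}
    if realism >= 4:   # stylized but plausibly real
        return {"placement": "early on-screen text",
                "wording": "AI-generated",
                "repeat": False}
    return {"placement": "caption or adjacent text",
            "wording": "AI-generated image/character",
            "repeat": False}
```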
FAQs
Do I have to disclose that an image or video is AI-generated in an advertisement?
If the AI-generated element could mislead consumers about what is real, who is speaking, or whether a real person endorsed the product, you should disclose it clearly and close to the media. The more realistic the likeness, the more important the label becomes.
Is “AI-assisted” a sufficient disclosure for an AI-generated spokesperson?
Often no. “AI-assisted” can be too vague when the material fact is that the spokesperson or voice is synthetic. Use direct language such as “AI-generated spokesperson” or “AI-generated voice” when that is what consumers need to understand.
Do virtual influencers need to disclose sponsorships?
Yes. Material connections must be disclosed regardless of whether the endorser is human or virtual. Use clear labels like “Ad” or “Sponsored,” and ensure the disclosure appears where consumers will notice it.
If I disclose “AI-generated,” can I use a fake testimonial?
Disclosure does not automatically fix a misleading testimonial. If the message implies real consumer experience or typical results that are not substantiated, it can still be deceptive. Consider using demonstrations, clearly labeled dramatizations, and substantiated claims instead.
Where should the disclosure appear in short-form video ads?
Place an on-screen disclosure early (within the first few seconds) and keep it readable. If the ad is voice-led or audio-only, add a spoken disclosure. Repeat the disclosure if the ad is long or the AI character reappears.
What records should my team keep to support compliance?
Keep evidence for advertising claims, copies of each ad variant, disclosure placement screenshots, vendor/model details, and signed rights/consent documents. Also keep written guidance given to influencers or agencies and notes explaining why disclosures are clear and conspicuous.
FTC compliance for AI-generated likenesses comes down to one discipline: prevent consumers from drawing the wrong conclusion about identity, authenticity, or persuasion. In 2025, the safest approach is to label synthetic spokespeople plainly, disclose sponsorships where endorsements appear, and keep disclosures prominent across devices. Pair transparency with strong claim substantiation and documentation. When realism rises, disclosure must rise with it.
