Understanding FTC disclosure requirements for AI-generated likenesses matters more in 2025 than ever, as synthetic voices, faces, and avatars spread across ads, social posts, and customer support. Brands want speed and scale, while audiences expect honesty about what’s real and what’s simulated. If your campaign uses a digital double, you need clear, timely disclosures—or risk enforcement, reputational harm, and platform penalties. Here’s what to do next.
FTC disclosure requirements for AI-generated likenesses: what the rules target
The Federal Trade Commission’s core focus is deception: marketing that misleads people in a way that affects their decisions. AI-generated likenesses raise that risk because they can create a false impression that a real person said, did, or endorsed something. In practice, the FTC looks at the net impression of the content—what the typical viewer walks away believing—not just what a fine-print line says.
While the FTC’s standards apply broadly to advertising and endorsements, AI-generated likenesses commonly trigger issues in three situations:
- Implied endorsement: A synthetic version of a celebrity, influencer, doctor, or “customer” appears to recommend a product.
- Fabricated testimonials: AI-generated videos or audio that present a person as a genuine buyer or patient when they are not.
- False identity signals: Content that looks like a real employee, founder, or expert speaking when it is actually an AI avatar or reconstructed voice.
A practical takeaway: if your audience could reasonably believe a real person is speaking or endorsing, your disclosure must prevent that misunderstanding at the moment it matters—before the audience acts.
AI-generated likeness disclosure: when you must disclose and what counts as “material”
Disclosures matter when they address a material fact—information that would likely affect a consumer’s choice or conduct. With AI-generated likenesses, “material” often includes whether a person is real, whether they actually participated, and whether an endorsement is genuine.
Use disclosure when any of the following are true:
- The likeness suggests real participation: The video implies the person actually spoke the script, attended an event, or used the product.
- The identity is central to trust: The speaker is framed as a professional (doctor, financial expert), a satisfied customer, or a recognizable public figure.
- The claim relies on credibility: Health, finance, safety, performance, or “before and after” outcomes are presented through a human story or testimonial.
- The format increases believability: Realistic voice cloning, photorealistic face synthesis, or “talking head” ads are more likely to require an explicit disclosure.
What if you use AI only for editing—like noise reduction, background cleanup, or translation? If the person truly participated and the edits don’t change the meaning, you may not need a special AI disclosure. But if you alter what the person appears to say, imply a new endorsement, or create new footage they never performed, disclosure becomes essential and you should also evaluate whether the execution is deceptive even with a disclosure.
Follow-up question readers often ask: Is disclosure always enough? No. If the overall content is misleading—such as inventing a medical testimonial—an “AI-generated” label does not cure a deceptive claim. The FTC can still treat it as an unfair or deceptive practice.
Clear and conspicuous disclosures: placement, wording, and format that pass scrutiny
The FTC expects disclosures to be clear and conspicuous. For AI-generated likenesses, that means consumers should notice, understand, and remember the disclosure before they’re influenced by the synthetic performance. Disclosures that appear after a purchase click, at the end of a long video, or buried in a profile bio are risky because they don’t correct the first impression.
Placement rules of thumb:
- Put it where the claim is: On-screen near the likeness for video; adjacent to the image for static ads; in the same post text for social media.
- Repeat for long content: If the segment runs longer than a brief clip, restate at natural breaks and keep a persistent on-screen label when the synthetic person is visible.
- Make it unavoidable: Don’t rely on hover states, tiny footnotes, “more” truncation, or a link to a separate disclosure page.
Wording should be plain and specific. Avoid jargon like “synthetic media” if the audience won’t understand it. Consider these examples:
- For ads featuring a digital double: “This is an AI-generated depiction. The person shown did not record this message.”
- For an authorized avatar of a real spokesperson: “AI-generated video of [Name], created with permission.”
- For voice cloning in a spot: “AI-generated voice. Script approved by [Name].” (Use only if true.)
- For fictional or composite ‘customers’: “AI-generated actor portrayal. Not a real customer.”
Format considerations:
- Visual disclosures: Use legible font size, strong contrast, and enough screen time to read. Keep it on-screen during the key endorsement moments (a minimal overlay sketch follows this list).
- Audio disclosures: Include a spoken disclosure when the primary channel is audio (podcasts, radio-style ads, voice assistants).
- Mobile-first design: Assume most viewers see content on phones. Test that the disclosure remains readable without zooming.
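As an illustration of the visual and mobile-first points above, the sketch below shows one way an on-media disclosure overlay for a web-embedded video could be built. The function name `addDisclosureOverlay`, the default label text, and the specific sizing and contrast values are illustrative assumptions, not FTC-prescribed specifications; the point is that the label lives on the media itself, scales for phone screens, and does not depend on hover states or captions.

```typescript
// Minimal sketch: keep a disclosure label visible on top of an embedded video.
// The function name, default text, and styling values are illustrative
// assumptions, not FTC-prescribed specifications.
function addDisclosureOverlay(
  video: HTMLVideoElement,
  text = "AI-generated depiction. The person shown did not record this message."
): void {
  const label = document.createElement("div");
  label.textContent = text;

  // Legibility choices: on-media placement, large type, strong contrast,
  // no reliance on hover states or caption text.
  Object.assign(label.style, {
    position: "absolute",
    left: "0",
    right: "0",
    bottom: "8%",
    textAlign: "center",
    fontSize: "clamp(14px, 2.5vw, 22px)", // stays readable on phones
    color: "#ffffff",
    background: "rgba(0, 0, 0, 0.75)",
    padding: "0.5em 1em",
    pointerEvents: "none", // label never blocks playback controls
  });

  // Attach the label to the video's container so it tracks the media itself.
  const container = video.parentElement ?? document.body;
  container.style.position = "relative";
  container.appendChild(label);
}
```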
Follow-up question: Can we disclose only in the caption? If viewers can consume the content without seeing the caption (common on many platforms), treat caption-only disclosure as insufficient. Use on-media disclosure for video and image creatives.
Endorsements and testimonials guidance: influencers, celebrities, and synthetic spokespeople
AI-generated likenesses collide with endorsement principles quickly. The FTC’s endorsement approach generally expects that endorsements reflect honest opinions and experiences, and that material connections (payments, free products, affiliate relationships) are disclosed. When you add AI, you also need to disclose the nature of the likeness if it could mislead.
If you use a real influencer’s AI avatar:
- Get written permission and define allowed scripts, platforms, and duration.
- Disclose both the material relationship (e.g., “Paid partnership”) and that the content is AI-generated or AI-assisted if the avatar/voice is simulated.
- Confirm substantiation for any claims the avatar makes. If the influencer never used the product, don’t script “I use this every day.”
If you use a celebrity lookalike or soundalike without authorization, you face a cluster of risks: deception, false endorsement theories, state right-of-publicity claims, and platform enforcement. Even with disclosure, this approach is high-risk because the creative intent is to borrow credibility. A safer path is to use original talent, clearly fictional characters, or properly licensed likenesses with transparent labeling.
If you generate “expert” spokespeople (e.g., “Dr. Smith”): you must avoid implying credentials that don’t exist. If the person is fictional or AI-generated, say so clearly and avoid medical or financial advice formats that consumers interpret as professional guidance. When claims touch regulated areas (health, weight loss, investing), prioritize qualified human review, strong substantiation, and conservative scripting.
Follow-up question: Do we need a disclosure if the avatar is obviously animated? Not always, but “obvious” is context-dependent. A stylized cartoon in a game-like ad may not mislead; a realistic avatar in a news-style frame can. When the format mimics real footage, disclose.
AI advertising compliance program: documentation, approvals, and vendor controls
Compliance is not just about labeling; it’s about repeatable process and evidence. If the FTC or a platform asks questions, you should be able to show how you avoided deception and verified claims.
Build a practical workflow:
- Inventory and classify use cases: Identify where AI-generated likenesses appear (paid ads, organic social, customer support, landing pages, livestreams).
- Pre-flight checklist: Confirm permissions, script accuracy, claim substantiation, and disclosure placement for each asset (a checklist sketch follows this list).
- Human review: Require legal/compliance sign-off for realistic human likenesses, health/finance claims, and any “testimonial” framing.
- Version control: Archive scripts, prompts, edit logs, and final exports so you can prove what was published and when.
- Training: Teach marketing and creators what “clear and conspicuous” means on the platforms they use every day.
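To make the pre-flight step concrete, here is a minimal sketch of how a team might record and gate an asset that uses an AI-generated likeness before it ships. The interface, field names, and gating rules are illustrative assumptions about internal policy, not a legal standard.

```typescript
// Minimal sketch of a pre-flight record for an asset that uses an AI-generated
// likeness. Field names and gating logic are illustrative assumptions about
// internal policy, not a legal standard.
interface LikenessAssetChecklist {
  assetId: string;
  usesRealPersonLikeness: boolean;  // avatar or voice of an identifiable person
  writtenPermissionOnFile: boolean; // consent covering scripts, platforms, duration
  scriptReviewedForClaims: boolean; // objective claims have substantiation on file
  disclosureText: string;           // e.g. "AI-generated video of [Name], created with permission."
  disclosureOnMedia: boolean;       // appears on the creative itself, not only in a caption
  legalSignOff: boolean;            // required for realistic likenesses and testimonial framing
}

// Returns the issues that should block publication under this assumed policy.
function preflightIssues(asset: LikenessAssetChecklist): string[] {
  const issues: string[] = [];
  if (asset.usesRealPersonLikeness && !asset.writtenPermissionOnFile) {
    issues.push("Missing written permission for a real person's likeness.");
  }
  if (!asset.scriptReviewedForClaims) {
    issues.push("Script claims have not been reviewed for substantiation.");
  }
  if (asset.disclosureText.trim() === "" || !asset.disclosureOnMedia) {
    issues.push("Disclosure is missing or not placed on the media itself.");
  }
  if (!asset.legalSignOff) {
    issues.push("Legal/compliance sign-off is not recorded.");
  }
  return issues;
}
```

A non-empty result from `preflightIssues` would simply hold the asset until each gap is closed and documented.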
Vendor and tool controls reduce surprises:
- Contract terms: Require vendors to warrant they have rights to training data and to provide provenance or consent records for licensed likenesses.
- Prohibited uses: Ban unlicensed celebrity likenesses, fabricated customer testimonials, and false “expert” credentials.
- Watermarks and labeling: Use consistent on-screen labels; consider internal watermarking to track where assets spread.
Monitoring and corrections:
- Spot-check live placements: Disclosures can break when ads are resized or reformatted (see the sketch after this list).
- Fast takedown path: Establish who can pause campaigns if the disclosure fails or an asset is misinterpreted.
- Update disclosures as formats change: A disclosure that works in a feed may not work in a story or short-form video placement.
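Here is a minimal sketch of what an automated spot check on live placements could look like. The placement fields and the thresholds are illustrative assumptions about what a team might measure; they are not regulatory minimums.

```typescript
// Minimal sketch of a live-placement spot check. The fields and thresholds are
// illustrative assumptions about what a team might measure, not regulatory minimums.
interface LivePlacement {
  placementId: string;
  format: "feed" | "story" | "short_video";
  disclosureText: string;           // disclosure text actually rendered in this placement
  disclosureFontPx: number;         // measured label size in the exported creative
  disclosureVisibleSeconds: number; // how long the label stays on screen
}

function flagPlacement(
  placement: LivePlacement,
  minFontPx = 14,  // assumed internal legibility floor
  minSeconds = 3   // assumed minimum on-screen time
): string[] {
  const flags: string[] = [];
  if (placement.disclosureText.trim() === "") {
    flags.push(`${placement.placementId}: disclosure text was dropped during reformatting.`);
  }
  if (placement.disclosureFontPx < minFontPx) {
    flags.push(`${placement.placementId}: disclosure is likely illegible on mobile.`);
  }
  if (placement.disclosureVisibleSeconds < minSeconds) {
    flags.push(`${placement.placementId}: disclosure is not on screen long enough to read.`);
  }
  return flags;
}
```

Any flagged placement would route to the takedown path described above so the campaign can be paused while the creative is fixed.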
Follow-up question: What should we document? Keep consent/permission records, substantiation for objective claims, the disclosure language you used, and screenshots or exports proving placement. This documentation supports credibility and speeds up incident response.
Enforcement risk and how to reduce it: red flags the FTC and consumers notice
Enforcement risk rises when AI-generated likenesses are used to create false trust. Consumers also report these issues quickly, and platforms may restrict accounts or ads even before regulators act. Focus on reducing the behaviors that look like intentional deception.
Common red flags:
- “News-style” ads with anchors or “reporters” who are AI-generated and presented as real.
- Medical miracle claims delivered through emotional AI testimonials or doctored “before and after” stories.
- Impersonation of company personnel (CEO/founder/customer support) without clear labeling.
- Micro-disclosures that are technically present but practically invisible on mobile.
- Disclosure mismatch where the caption says “AI-generated” but the landing page and retargeting creative omit it.
Risk-reduction moves that work:
- Disclose early and repeatedly in the user journey, not just once.
- Use straightforward language and avoid euphemisms that downplay simulation.
- Keep claims conservative and evidence-backed; don’t rely on synthetic emotion to sell unverified outcomes.
- Design for the “glance test”: if someone watches three seconds with sound off, can they still understand it’s AI?
If you’re unsure, treat disclosure as a baseline and then ask a tougher question: Would a reasonable consumer feel tricked after learning this was AI? If yes, rebuild the creative rather than trying to disclose your way out of a misleading concept.
FAQs: FTC disclosure requirements for AI-generated likenesses
Do FTC rules apply to small businesses and creators using AI avatars?
Yes. The FTC’s deception standards apply regardless of company size. Smaller teams often face the same risks because short-form ads and influencer-style promotions can mislead quickly if a synthetic likeness implies a real endorsement.
What’s the difference between “AI-assisted” and “AI-generated” for disclosures?
“AI-assisted” typically means a real person performed and AI tools enhanced production (cleanup, color, minor edits). “AI-generated” suggests the likeness or performance itself is simulated. If the person did not actually record the message, use “AI-generated” (or equally clear language) so consumers don’t assume authenticity.
Can we use a disclaimer like “for entertainment purposes only” instead of disclosing AI?
Usually no. That phrasing is vague and may not correct the specific impression that a real person is speaking or endorsing. Use a direct disclosure about the AI-generated likeness, placed where viewers will see or hear it.
Where should the disclosure appear in a short video ad?
Place it on-screen near the likeness at the start or as soon as the likeness appears, keep it visible during the key endorsement moments, and ensure it’s readable on mobile. If the ad is primarily audio-forward, add a spoken disclosure too.
Do we need permission to create an AI-generated likeness of a real person?
Often yes, and it’s a best practice even when you believe an exception may apply. Beyond FTC deception concerns, many jurisdictions recognize rights of publicity and other claims. Obtain written consent and define how the likeness can be used.
What if we disclose properly but the ad still implies a fake result?
Disclosure doesn’t replace truthfulness. If the underlying claim is misleading or unsubstantiated—especially in health or financial contexts—you still face risk. Fix the claim, provide substantiation, or remove it.
How do we handle AI customer service agents that look like humans?
Tell users they are interacting with an AI agent at the start of the conversation and when switching from human to AI (or vice versa). If the agent uses a human-like avatar or voice, ensure the disclosure is visible and understandable before users rely on the interaction.
FTC compliance for AI-generated likenesses is achievable when you prioritize clarity over cleverness. Disclose AI simulation where it could change how people interpret an endorsement, place the notice where consumers will actually perceive it, and document permissions and claim support. Treat disclosure as part of a broader honesty system, not a last-minute label. Build that system now, and your marketing stays credible as AI realism accelerates.
