The Human Hit “Approve” — Who’s Liable Now?
A recent Gartner survey found that 63% of marketing organizations now use generative AI in commercial creative production. That number will keep climbing. But here’s the uncomfortable question most brand teams aren’t asking: when ChatGPT writes the copy, Adobe Firefly generates the imagery, and Runway produces the video — and then a junior brand manager clicks “approved” — who actually owns the liability if something goes wrong?
Mapping the AI-generated ad creative liability chain isn’t theoretical anymore. It’s operational. And the brands that haven’t done it yet are playing a game of legal roulette with every campaign they ship.
The Three-Layer Stack: Tool, Output, Approval
Think of AI-assisted creative production as a three-layer stack. At the bottom sits the AI tool provider — OpenAI, Adobe, Runway. In the middle is the generated output — the text, image, or video that didn’t exist before the prompt. At the top is the human approval layer — the brand marketer, creative director, or compliance officer who greenlit the asset for publication.
Each layer carries distinct risk. And the liability doesn’t flow neatly downhill.
Tool providers broadly disclaim liability for outputs. OpenAI’s terms of service explicitly state that users are responsible for ensuring generated content complies with applicable law. Adobe Firefly’s commercial licensing offers more IP protection — Adobe will indemnify commercially licensed Firefly outputs against certain IP claims — but that indemnity has limits and conditions. Runway’s terms follow a similar pattern: you own your outputs, and you own the legal exposure.
The default legal position across every major generative AI platform is the same: the brand that publishes the content bears the liability, not the tool that generated it.
This means the approval layer — that human who clicked “publish” — isn’t just a workflow step. It’s the legal chokepoint where liability crystallizes.
What Could Actually Go Wrong?
Let’s get specific. The risk categories for AI-generated commercial creative fall into five buckets:
- Intellectual property infringement. Generative models trained on copyrighted material can produce outputs that infringe on existing works. An AI-generated image that closely resembles a copyrighted photograph or a protected illustration style is a lawsuit waiting to happen. Brands should understand the risks of AI style imitation before shipping any creative.
- False or misleading claims. ChatGPT can hallucinate product benefits, invent statistics, or fabricate endorsements. If that hallucinated claim ends up in an ad, the brand is on the hook — not OpenAI. The FTC’s enforcement guidelines make clear that advertisers are responsible for the truthfulness of their ads regardless of how those ads were produced.
- Likeness and personality rights. Firefly or Runway might generate a face or voice that’s eerily similar to a real person. That’s a right-of-publicity claim. We’ve covered the escalating risks around deepfakes and creator likeness extensively — AI-generated commercial creative raises the same issues at scale.
- Disclosure failures. Multiple jurisdictions now require disclosure when AI is used to generate commercial content. The EU AI Act’s transparency requirements, China’s deep synthesis regulations, and emerging U.S. state laws all impose obligations. Miss them, and you’re facing regulatory action.
- Data privacy violations. Prompts fed into AI tools can inadvertently include personal data, customer insights, or proprietary information. If that data gets incorporated into model training, you may have a third-party AI training privacy problem.
None of these risks are hypothetical. The U.S. Copyright Office has issued guidance questioning the copyrightability of AI-generated works. Getty Images sued Stability AI. Multiple class-action suits are winding through courts. The legal landscape is moving fast — and it’s moving against the assumption that “the AI did it” is a viable defense.
Who’s Actually in the Chain?
Here’s where brand teams need to get ruthlessly specific. The liability chain for a single AI-generated ad typically includes:
- The AI tool provider — limited liability, broad indemnity disclaimers, some exceptions (Adobe’s Firefly indemnity program being the notable outlier)
- The prompt engineer or creative user — may be in-house or agency-side; the person whose prompts shaped the output
- The creative agency — if they produced the work under a services agreement, their liability depends entirely on contract language
- The brand’s compliance/legal reviewer — the human checkpoint before publication
- The approving executive — whoever signs off on the campaign
- The brand itself — ultimate publisher, ultimate liability holder
Notice something? The chain has six nodes, but regulatory bodies and courts consistently collapse it to one: the brand. Every other party in the chain might share some contractual liability, but the primary liability — the entity the FTC fines, the entity the plaintiff sues, the entity named in the regulatory action — is almost always the brand that published the ad.
That’s not a bug in the system. It’s the system working as designed. Advertising law has always placed responsibility on the advertiser.
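To make the mapping concrete, the six-node chain above can be sketched as a simple data structure. Node names and liability labels here are illustrative placeholders, not legal classifications:

```python
# Hypothetical sketch of the six-node liability chain.
# Labels are illustrative, not legal advice.

LIABILITY_CHAIN = [
    {"node": "AI tool provider",    "liability": "contractual, heavily disclaimed"},
    {"node": "prompt engineer",     "liability": "internal accountability"},
    {"node": "creative agency",     "liability": "contractual, per MSA/SOW"},
    {"node": "compliance reviewer", "liability": "internal accountability"},
    {"node": "approving executive", "liability": "internal accountability"},
    {"node": "brand (publisher)",   "liability": "primary regulatory and civil"},
]

def primary_liability_holder(chain):
    """Return the node that bears primary (regulatory) liability."""
    return next(n["node"] for n in chain if n["liability"].startswith("primary"))
```

The point the structure makes: however many nodes your chain has, exactly one of them is where regulators and plaintiffs will look first.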
The Agency Gap: Where Contracts Get Dangerous
Most brand-agency contracts written before generative AI became mainstream don’t address AI-generated creative at all. That’s a problem.
If your agency uses ChatGPT to draft copy or Runway to produce video assets, and the contract doesn’t explicitly address AI-generated work, you’re operating in a gray zone. Key questions to negotiate:
- Does the agency warrant that all delivered creative is original and non-infringing — even when AI-generated?
- Who is responsible for AI output review and fact-checking?
- Does the agency carry insurance that covers AI-generated content claims?
- Are AI tools listed and approved in the scope of work?
- What disclosure obligations does the agency assume?
If these questions aren’t answered in your MSA or SOW, you’re absorbing risk you may not know about. And the broader challenge of cross-platform content syndication makes this even more complex — one AI-generated asset might be deployed across a dozen channels, each with different regulatory requirements.
Audit your agency contracts now. If the words “generative AI,” “machine-generated content,” or “AI-assisted creative” don’t appear anywhere, your liability exposure is essentially unmanaged.
Building the Approval Framework That Actually Protects You
Mapping the liability chain is step one. Step two is building an approval framework that holds up under scrutiny. Here’s what that looks like in practice:
Tiered review based on AI involvement. Not every AI touchpoint requires the same level of scrutiny. Using ChatGPT to brainstorm headline variations is different from using Runway to generate a hero video. Create tiers — light touch for ideation assistance, full legal review for AI-generated final assets.
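One way to operationalize the tiers is a simple lookup that defaults to full review for anything unclassified. The tier names and categories below are assumptions for illustration, not an industry standard:

```python
# Illustrative tier assignment for AI-assisted creative review.
# Categories and tier names are assumptions; adapt to your workflow.

REVIEW_TIERS = {
    "ideation":    "light",     # e.g. ChatGPT brainstorming headline variations
    "draft_copy":  "standard",  # AI-drafted text with expected human rewrite
    "final_image": "full",      # AI-generated final visual asset
    "final_video": "full",      # AI-generated hero video
}

def required_review(ai_involvement: str) -> str:
    """Map the level of AI involvement to a review tier.

    Anything not explicitly classified gets full legal review —
    failing closed is the safer default for a liability chokepoint.
    """
    return REVIEW_TIERS.get(ai_involvement, "full")
```

The design choice worth copying even if the code isn't: unknown cases escalate to full review rather than slipping through at light touch.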
Mandatory provenance logging. Document which AI tools generated which assets, what prompts were used, and who reviewed the output. This isn’t bureaucracy — it’s evidence preservation. If a claim arises, you need a clear audit trail showing human oversight at every critical juncture. Adobe’s Content Authenticity Initiative provides technical standards for this kind of provenance tracking.
IP clearance protocols. Run AI-generated images through reverse image search. Check AI-generated copy against plagiarism databases. For AI-generated video, verify that no recognizable faces, voices, or locations create unintended associations. These steps add time. They also prevent lawsuits.
Disclosure compliance by jurisdiction. Build a disclosure matrix mapping your target markets to their AI content disclosure requirements. The EU, China, South Korea, and several U.S. states have distinct rules. Our AI disclosure framework guide breaks this down in more detail.
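The matrix itself can live as a small mapping that fails loudly when a target market has no rules on file. The entries below are simplified placeholders — verify the actual obligations with counsel before relying on any of them:

```python
# Illustrative jurisdiction-to-disclosure matrix.
# The rules shown are simplified placeholders, not legal guidance.

DISCLOSURE_MATRIX = {
    "EU":          ["label AI-generated content (AI Act transparency)"],
    "China":       ["label deep synthesis content"],
    "South Korea": ["disclose AI use in commercial content"],
    "US-CA":       ["follow state AI disclosure rules where enacted"],
}

def disclosures_for(markets):
    """Collect disclosure obligations for every target market.

    Raises rather than silently skipping a market with no mapped rules —
    an unmapped jurisdiction is a compliance gap, not a free pass.
    """
    missing = [m for m in markets if m not in DISCLOSURE_MATRIX]
    if missing:
        raise ValueError(f"No disclosure rules mapped for: {missing}")
    return {m: DISCLOSURE_MATRIX[m] for m in markets}
```

The useful behavior is the exception: a campaign targeting a market you haven't mapped should block, not ship.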
Human-in-the-loop as documented policy. “A human approved it” is only a defense if you can prove that human was qualified, empowered, and actually reviewed the content substantively — not just rubber-stamped it in a workflow tool. Train your approvers. Document their training. Make it defensible.
The Insurance Question Nobody’s Asking
Most commercial general liability and media liability policies were not written with AI-generated content in mind. Some insurers are beginning to offer AI-specific riders, but coverage varies wildly. Talk to your broker. Specifically ask whether your current policy covers:
- IP infringement claims arising from AI-generated assets
- Regulatory fines related to AI disclosure failures
- Right-of-publicity claims from AI-generated likenesses
- Third-party data claims linked to AI tool usage
If your broker can’t answer these questions, find one who can. The Lloyd’s of London market has been among the first to develop frameworks for AI-related commercial risks, and specialty insurers are following.
The Bottom Line
Every brand using generative AI in commercial creative production needs a documented liability map — not a vague understanding, but a written document that traces responsibility from prompt to publication. Review it quarterly. Update your contracts. Train your approvers. The tool didn’t publish the ad. You did.
Frequently Asked Questions
Who is legally liable when an AI-generated ad contains a false claim?
The brand that published the ad bears primary legal liability for false or misleading claims, regardless of whether the claim was generated by ChatGPT, Firefly, or any other AI tool. AI tool providers broadly disclaim responsibility for outputs in their terms of service. The FTC and equivalent regulators hold advertisers accountable for the truthfulness of published ads no matter how the content was produced.
Does Adobe Firefly’s indemnity cover all IP claims from AI-generated images?
Adobe offers IP indemnification for commercially licensed Firefly outputs, but this coverage has conditions and limits. It typically applies only to outputs generated through Firefly’s commercially available models and does not cover all possible IP claims. Brands should review the specific terms of Adobe’s indemnity program and not treat it as blanket protection against all intellectual property risks.
What disclosure requirements apply to AI-generated commercial content?
Disclosure requirements vary by jurisdiction. The EU AI Act mandates transparency when AI-generated content is used commercially. China requires labeling of deep synthesis content. Several U.S. states have enacted or proposed AI disclosure laws for commercial contexts. Brands operating across multiple markets must build a jurisdiction-specific disclosure compliance matrix to ensure they meet all applicable requirements.
How should brands update agency contracts to address AI-generated creative?
Brands should amend master service agreements and scopes of work to include clauses covering AI-generated content. Key provisions include requiring agencies to warrant originality and non-infringement of AI outputs, specifying approved AI tools, assigning responsibility for AI output fact-checking and review, and confirming that agency insurance covers AI-related claims. Contracts silent on generative AI leave brands with unmanaged liability exposure.
Can brands copyright AI-generated advertising content?
Copyright protection for AI-generated content remains legally uncertain. The U.S. Copyright Office has indicated that purely AI-generated works without meaningful human authorship may not qualify for copyright protection. However, content that involves substantial human creative input alongside AI assistance may be copyrightable. Brands should document human creative contributions throughout the production process to strengthen potential copyright claims.