AI Disclosure in Advertising: An Execution-Ready Checklist Based on the IAB Transparency Framework
The IAB introduced an AI transparency and disclosure framework meant to give advertising stakeholders a starting point for how to disclose AI use, while acknowledging that practical implementation questions remain. For marketers, the most reliable way to apply it is to translate the framework into internal thresholds for when AI involvement triggers disclosure, plus a repeatable workflow for placement and QA. Operationally, teams can reduce risk by making disclosures resilient across formats that crop or truncate and by maintaining lightweight documentation (tools used, high-level inputs, approvals, asset IDs). Treat disclosure as part of creative execution: validate in-market and feed learnings back into governance and QA.

Turn AI disclosure from a policy into repeatable execution: decision rules, resilient placement, and audit-ready records.
Key takeaways
- Use the IAB framework as a starting point, then convert it into internal rules, thresholds, and a decision tree your team can apply consistently.
- Design disclosures to survive real placement conditions, including cropping, truncation, and format-specific UI changes.
- Keep minimum viable documentation (tools used, high-level inputs, approvals, asset IDs) linked to each creative variant for audit readiness.
- Validate in-market by treating disclosure as a creative variable and reviewing outcomes beyond clicks, then update your decision tree and QA steps.
What the IAB AI transparency and disclosure framework is, and what it does (and does not) answer
The framework exists to create a shared baseline of transparency expectations across advertising stakeholders. In practice, that means giving teams a common reference point for how to describe AI involvement and how to approach disclosure when AI plays a role in what audiences see or hear.
What this gives marketers today is a starting point: a structured way to approach AI disclosure in advertising so teams are not improvising policy from scratch for every campaign. It can also help align internal teams and external partners on what “good” disclosure looks like at a high level.
What remains unclear or variable is how to implement disclosure consistently across channels and workflows. Interpretation can differ by team, format, and production setup, and the practical questions often show up at execution time: what qualifies as AI involvement, when it triggers disclosure, where the disclosure should appear, and how to prove you did it if asked later.
To manage that variability, treat the framework as a foundation and add the missing operational layer: your internal decision tree, placement rules by channel, and a QA and documentation routine that is repeatable under deadline.
Build an internal decision tree: when does AI use trigger disclosure?

A lightweight decision tree helps teams apply the same disclosure threshold under deadline.
Start by defining what “AI involvement” means for your organization, specifically in creative and production workflows. This definition should be practical, not philosophical. The goal is to ensure everyone involved in production can answer the same question the same way: did AI materially contribute to the audience-facing creative, and if so, how?
To make that usable, write a decision tree that breaks AI involvement into categories your team can recognize while producing assets. Example categories to consider include:
- Synthetic people or voices: anything that presents a person, face, or voice that is not a direct capture of a real human performance for that asset.
- Materially AI-generated content: AI that meaningfully shapes what the audience sees or hears (not just behind-the-scenes assistance).
- Minor edits: small touch-ups or adjustments where AI is used as an editing aid rather than to create the main content.
Once you have categories, set internal thresholds for which categories trigger disclosure, as in the sketch below. The point is not to chase perfection; it is to ship consistently and to avoid the risk that comes from ad hoc decisions made under time pressure.
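To make the thresholds concrete, here is a minimal sketch in Python of how a team might encode the decision tree. The category names, thresholds, and escalation rule are illustrative assumptions, not requirements from the IAB framework.

```python
from enum import Enum

class AIInvolvement(Enum):
    """Illustrative categories; align these with your own definitions."""
    SYNTHETIC_PERSON_OR_VOICE = "synthetic_person_or_voice"
    MATERIALLY_AI_GENERATED = "materially_ai_generated"
    MINOR_EDIT = "minor_edit"
    NONE = "none"

# Example thresholds: which categories trigger disclosure.
# Your legal, brand, and compliance stakeholders set the real policy.
DISCLOSURE_REQUIRED = {
    AIInvolvement.SYNTHETIC_PERSON_OR_VOICE: True,
    AIInvolvement.MATERIALLY_AI_GENERATED: True,
    AIInvolvement.MINOR_EDIT: False,
    AIInvolvement.NONE: False,
}

def disclosure_decision(category: AIInvolvement, borderline: bool = False) -> str:
    """Return 'disclose', 'no_disclosure', or 'escalate' for borderline cases."""
    if borderline:
        return "escalate"  # route to the named decision-maker before launch
    return "disclose" if DISCLOSURE_REQUIRED[category] else "no_disclosure"
```

The value of encoding the tree this way is less about automation and more about forcing the thresholds to be explicit enough that an intake form or preflight checklist can ask one unambiguous question per category.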
Next, document exceptions and an escalation path. Exceptions happen in production, but they should be explicitly handled so the team is not guessing. Define:
- Who decides when a borderline case triggers disclosure.
- How the decision is recorded so it is retrievable later.
- What happens when teams disagree, such as a simple escalation step before launch.
Operational guidance: keep the decision tree short enough to use in a creative intake form or preflight checklist. If it cannot be applied quickly, it will be skipped when timelines compress.
Where to put disclosures: ad creative vs. landing page (and how to avoid placement failures)

Design disclosures to survive cropping and format variation, with a backup surface when needed.
Disclosure has multiple possible “surfaces,” and you should map them before you pick one. Common surfaces include:
- In-ad text embedded in the creative.
- End cards for video or animated units.
- Captions or accompanying post text, where applicable.
- Landing page statements that provide disclosure when users click through.
In practice, relying on a single surface can fail because placements behave differently. Cropping, truncation, and UI variation across formats can remove or obscure the disclosure even when it was present in the original creative file. That can create a situation where a team believes it disclosed, but the audience never actually sees it.
To reduce this risk, decide in advance what your hierarchy is. For example, you might prefer an in-ad disclosure when feasible, supported by a landing page statement for redundancy. The right answer depends on your channel realities and creative formats, but the execution requirement is the same: the disclosure must remain visible and understandable where it is placed.
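One way to make that hierarchy explicit is a per-channel fallback order that preflight QA can check against. This is a minimal sketch; the channel names and surface ordering are assumptions to adapt to your own media plan.

```python
# Ordered surface preference per channel. The first surface that passes
# preflight QA ships; the landing page is kept as redundancy when available.
# Channel and surface names here are placeholders, not recommendations.
SURFACE_HIERARCHY: dict[str, list[str]] = {
    "social_feed": ["in_ad_text", "caption", "landing_page"],
    "online_video": ["end_card", "in_ad_text", "landing_page"],
    "display": ["in_ad_text", "landing_page"],
}

def pick_surfaces(channel: str, passed_qa: set[str]) -> list[str]:
    """Choose the highest-preference surface that passed QA, plus the
    landing page for redundancy if it also passed."""
    usable = [s for s in SURFACE_HIERARCHY[channel] if s in passed_qa]
    if not usable:
        raise ValueError(f"No disclosure surface passed QA for {channel}")
    surfaces = [usable[0]]
    if "landing_page" in passed_qa and surfaces[0] != "landing_page":
        surfaces.append("landing_page")
    return surfaces
```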
Use a preflight QA checklist to confirm disclosure visibility and clarity across channels before launch (a sketch for tracking these checks follows the list):
- Visibility check: confirm the disclosure is not cropped, covered, or pushed below a “more” truncation in the placements you plan to run.
- Legibility check: confirm it is readable at expected on-screen size and on common devices.
- Clarity check: confirm the language is understandable without needing additional context.
- Variant check: confirm each creative version and size has the disclosure, not just the primary master file.
- Placement check: confirm the disclosure survives any platform rendering differences between preview tools and live delivery.
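As a sketch of how these checks can be tracked per variant and placement (the field names are illustrative and mirror the checklist above):

```python
from dataclasses import dataclass

@dataclass
class DisclosurePreflight:
    """One record per creative variant and placement. Field names are
    illustrative and mirror the checklist above."""
    asset_id: str
    placement: str
    visible_uncropped: bool = False    # not cropped, covered, or behind a "more" fold
    legible_at_size: bool = False      # readable at expected size on common devices
    language_clear: bool = False       # understandable without additional context
    present_in_variant: bool = False   # this size/version carries the disclosure
    survives_live_render: bool = False # live delivery matches the preview tool

    def ready_to_launch(self) -> bool:
        return all([
            self.visible_uncropped,
            self.legible_at_size,
            self.language_clear,
            self.present_in_variant,
            self.survives_live_render,
        ])
```

A record that defaults to failing keeps the burden where it belongs: every box must be actively verified before the variant ships.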
Operational guidance: treat disclosure as part of the creative layout system, not last-minute copy. Build it into templates and production specs so it is less likely to be dropped during resizing or localization.
Operationalize without slowing down: documentation and approvals that scale
Disclosure decisions are hard to defend later if there is no trail. The goal is not heavy bureaucracy. It is minimum viable documentation that scales with production volume.
At a minimum, capture:
- AI tools used for the asset or variant.
- Prompts or inputs at a high level, described in a way your team is comfortable retaining.
- Approvals: who signed off and when.
- Final asset IDs: the exact files or creative identifiers that shipped.
Version control matters because disclosure can be correct for one variant and missing for another. Link disclosures to specific creative IDs and variants, so you can answer questions like “Which version ran?” and “Which disclosure was attached to it?” without reconstructing a timeline from memory.
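A minimal sketch of that record, keyed by asset ID and variant so "which version ran?" becomes a lookup rather than a reconstruction; all field names are hypothetical and should map onto whatever tracker you already use.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DisclosureRecord:
    """Minimum viable documentation for one shipped creative variant.
    Field names are hypothetical; map them to your existing tracker."""
    asset_id: str              # exact creative identifier that shipped
    variant: str               # size/localization/version, e.g. "9x16_es"
    ai_tools: tuple[str, ...]  # AI tools used for this asset or variant
    inputs_summary: str        # high-level description of prompts or inputs
    disclosure_text: str       # the disclosure attached to this variant
    approved_by: str           # who signed off
    approved_on: date          # when they signed off

# Keyed by (asset_id, variant): one entry per shipped creative version.
records: dict[tuple[str, str], DisclosureRecord] = {}

def which_disclosure(asset_id: str, variant: str) -> str:
    """Answer 'which disclosure was attached to this version?' directly."""
    return records[(asset_id, variant)].disclosure_text
```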
This documentation supports audit readiness for internal review and for external questions later, whether those questions come from a platform process or public scrutiny. When the team can produce a simple record of what tools were used, what was approved, and what assets ran, it reduces panic-driven investigations and helps keep future launches moving.
Operational guidance: embed these fields into whatever your team already uses to ship work (creative brief, asset tracker, or launch checklist). The best process is the one people will actually follow under deadline.
Measurement and validation: treat disclosure as a creative variable
Disclosure is not only a compliance task. It is also part of the creative experience, which means it can influence outcomes. When appropriate and permissible for your team, establish a baseline approach to compare disclosed versus non-disclosed variants. The intent is learning, not avoidance. Your governance should still determine when disclosure is required.
Monitor beyond clicks. Where your measurement stack allows, look for engagement quality and trust-related signals that your team considers meaningful. Keep the measurement plan aligned to your goal: understanding whether your disclosure approach is clear, resilient, and not causing unintended confusion in the user journey.
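As an illustrative sketch, tagging each variant's results with its disclosure status makes the comparison a simple grouped average across whatever metrics your stack reports; the metric names and values below are placeholders, not benchmarks.

```python
from statistics import mean

# Per-variant results tagged with disclosure status. Metric names and
# values are placeholders for whatever your measurement stack reports.
results = [
    {"disclosed": True,  "ctr": 0.012, "dwell_seconds": 8.4, "bounce_rate": 0.41},
    {"disclosed": False, "ctr": 0.013, "dwell_seconds": 7.9, "bounce_rate": 0.45},
]

def compare(metric: str) -> dict[str, float]:
    """Average one metric for disclosed vs. non-disclosed variants."""
    return {
        label: mean(r[metric] for r in results if r["disclosed"] == flag)
        for label, flag in (("disclosed", True), ("not_disclosed", False))
    }

for metric in ("ctr", "dwell_seconds", "bounce_rate"):
    print(metric, compare(metric))
```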
Finally, run a post-launch review loop. After the campaign:
- Confirm which assets actually delivered and whether disclosures displayed as expected in real placements.
- Collect issues found in QA or live delivery (cropping, truncation, missing variants).
- Feed the learnings back into the decision tree, templates, and the preflight checklist.
Operational guidance: if disclosures repeatedly fail in specific formats, treat it as a production constraint. Adjust templates, safe zones, or default surfaces (ad versus landing page) so the system becomes more reliable over time.
Frequently asked questions
What is the IAB AI Transparency and Disclosure Framework?
It is an IAB framework intended to give advertising stakeholders a starting point for transparency and disclosure expectations related to AI use in advertising, while leaving open practical questions about how teams implement it day to day.
When should marketers disclose AI use in advertising creative?
Use an internal decision tree based on your definition of AI involvement and clear thresholds for when AI contribution triggers disclosure. Many teams separate higher-risk categories (such as synthetic people or voices, or materially AI-generated content) from minor edits, and document exceptions and escalation paths so decisions are consistent.
Where should an AI disclosure appear: in the ad or on the landing page?
Map your disclosure surfaces first, including in-ad text, end cards, captions, and landing page statements, then choose placements that remain visible and understandable in real delivery conditions. Because cropping and truncation can remove disclosures, teams often plan placement with redundancy and validate visibility per format before launch.
How can teams QA AI disclosures across placements that crop or truncate creative?
Use a preflight checklist that tests each planned placement and creative variant for visibility, legibility, and clarity. Specifically check for cropping, truncation behind “more” UI, differences between preview and live rendering, and whether every size and variant includes the disclosure.