There are two compliance-themed agents in the fleet. They cover different surfaces and you'll see them in different places.
- Compliance — content/post compliance: FTC substantiation, music licensing, TikTok platform policy. Runs as a step inside Profile Audit and Campaign Launch. Detailed below.
- LDR Compliance — TikTok Shop operational compliance: the Late Dispatch Rate enforcement that restarted on 2026-04-06. Runs against Shop accounts on a cron and emits alerts when LDR trips warning or critical thresholds. See LDR Compliance — TikTok Shop late-dispatch enforcement below.
What the Compliance agent does
The Compliance agent (Sonnet 4.6; spec at ~/projects/tiktok-army/tiktok_army/agents/_catalog.py:219) reads recent posts and captions for an account and audits them against three buckets:
- FTC substantiation — claims that need evidence under US Federal Trade Commission rules.
- Music licensing — sounds used in posts and whether the account's tier is licensed for them.
- TikTok policy — platform rules (eligible categories, prohibited content, ad-policy compliance).
It returns a list of findings and a single `overall_status` of `pass`, `pass_with_warnings`, or `blocked`. The `strictness` option controls how aggressively soft issues get flagged: `lenient` means "blockers only", `standard` is the default, `strict` flags anything dubious.
Compliance runs in two seeded workflows: Profile Audit (where it's informational — read-only) and Campaign Launch (where its findings can stop a publish from happening).
The three claim categories
Findings come tagged with a category field. The three you'll see most:
`claim_substantiation` (FTC)
Triggered when a caption or on-screen text makes a factual claim about the product that needs evidence to back it up. The FTC's bar: any objective performance claim ("clinically proven", "24-hour hydration", "reduces wrinkles in 7 days") needs reasonable substantiation in advance of making the claim, and the brand has to be able to produce it on request.
What the agent flags:
- Time-bound performance claims: "24h hydration", "instant results", "overnight transformation."
- Quantified claims: "97% saw improvement", "doubles your hair growth."
- Comparison claims: "better than [competitor]", "the only X that does Y."
- Health/safety claims: "non-toxic", "hypoallergenic", "dermatologist-tested."
- Income claims: relevant for affiliate/business-opportunity posts.
What the agent does NOT flag (these are typically fine):
- Subjective claims: "I love this", "feels amazing."
- Process descriptions: "this is how I use it."
- Personal experience narratives clearly framed as one user's experience.
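As a rough illustration of the claim buckets above, here is a keyword sketch. The real agent reasons over the caption with an LLM; these regex patterns and the `needs_substantiation` helper are invented for illustration only:

```python
import re

# Illustrative keyword patterns only; the real agent uses an LLM, not regexes.
CLAIM_PATTERNS = {
    "time_bound": re.compile(r"\b\d+\s*(?:h|hr|hour|day)s?\b|instant|overnight", re.I),
    "quantified": re.compile(r"\b\d+(?:\.\d+)?\s*%|doubles|triples", re.I),
    "comparison": re.compile(r"\bbetter than\b|\bthe only\b", re.I),
    "health_safety": re.compile(r"non-?toxic|hypoallergenic|dermatologist", re.I),
}

def needs_substantiation(caption: str) -> list[str]:
    """Return the claim buckets a caption trips, in dict order."""
    return [name for name, pat in CLAIM_PATTERNS.items() if pat.search(caption)]
```

A subjective caption like "I love this, feels amazing" trips none of the buckets, which matches the "NOT flagged" list above.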
`music_licensing`
Triggered when a sound used in a post may not be licensed for the account's tier. TikTok has two relevant pools:
- Commercial Music Library — vetted for business use. Safe for business-tier accounts (which is what most brands operate under, including Shop accounts).
- General sounds (consumer) — popular pop tracks, etc. Licensed for personal use by individual creators but not for business accounts. Using these on a business account risks takedown and can affect Shop standing.
The agent identifies which pool each sound belongs to. Findings here are typically info severity if the sound is in the commercial library, and warning or higher if it's a general sound on a business account.
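The pool-to-severity logic reads roughly like this sketch (the function name, pool labels, and tier labels are assumptions, not the agent's real API):

```python
# Sketch of the pool-to-severity mapping described above.
# Pool and tier labels are illustrative, not the agent's real vocabulary.
def music_finding_severity(pool: str, account_tier: str) -> str:
    if pool == "commercial":        # Commercial Music Library sound
        return "info"               # licensed for any tier
    if account_tier == "business":  # general (consumer) sound on a business account
        return "warning"            # takedown / Shop-standing risk
    return "info"                   # personal account using a personal-use sound
```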
`tiktok_policy`
Triggered when the post hits any other platform rule. Examples:
- Restricted categories without proper disclosure (alcohol, supplements, financial advice).
- Branded content not properly tagged.
- Affiliate posts missing `#ad` or `#sponsored` disclosure.
- Misuse of trending challenges that have safety advisories.
This is the catch-all and the one where strictness setting matters most — strict will flag anything ambiguous, lenient only what's clearly violating.
Severity levels on findings
Each finding has a severity and a blocked flag. Four severities, two effects.
| Severity | What it means | `blocked` typical |
|---|---|---|
| `info` | Informational, no action needed | `false` |
| `warning` | Real issue but not workflow-stopping | `false` |
| `error` | Significant issue, recommend fixing | sometimes `true` |
| `critical` | Stop everything | `true` |
The `blocked` flag is the practical one — it directly determines what happens to the workflow run.
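Putting the severity table and the `blocked` flag together, here is a minimal sketch of how `overall_status` could fall out of a findings list. The `Finding` shape is assumed from the fields described in this doc, not copied from the agent's code:

```python
from dataclasses import dataclass

@dataclass
class Finding:        # field names assumed from the docs, not the real schema
    category: str     # claim_substantiation | music_licensing | tiktok_policy
    severity: str     # info | warning | error | critical
    blocked: bool
    summary: str = ""

def overall_status(findings: list[Finding]) -> str:
    """Any blocked finding wins; otherwise warning-or-worse downgrades pass."""
    if any(f.blocked for f in findings):
        return "blocked"
    if any(f.severity in ("warning", "error", "critical") for f in findings):
        return "pass_with_warnings"
    return "pass"
```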
"Blocked" vs "warning" — what each does
This is where the difference shows up in practice.
`pass_with_warnings`
The most common non-clean result. Has warning-severity findings but nothing with `blocked: true`.
What happens to the workflow.
- Profile Audit: warnings appear in the synthesis report under a "Compliance flags" section. Run completes normally.
- Campaign Launch: warnings appear on the relevant approval gate's Review card so you can decide whether to address them before publishing. The workflow doesn't stop on its own.
What you should do. Decide per-finding. A `claim_substantiation` warning for "24h hydration" might mean "have the lab data ready" (acceptable) or "the claim is false, change the caption" (must fix). Click through to the per-step trace to see the agent's reasoning.
`blocked`
Triggered when at least one finding has `blocked: true`. These are policy or legal issues that the agent has decided shouldn't ship as-is.
What happens to the workflow.
- Profile Audit: still informational — the audit completes and the synthesis report leads with the blockers. Nothing was going to publish anyway.
- Campaign Launch: the `blocked` status gates the publish step. The Content Producer agent checks the upstream Compliance output before it lets `tiktok_publisher.publish` run. A `blocked` finding fails the publish step with a clear error message.
What you should do. Read the finding's `summary` field to see exactly what's wrong, fix it (typically by changing the caption or swapping the sound), and re-run the workflow.
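A minimal sketch of that publish gate, assuming the upstream compliance output is a dict shaped like the findings described above (the `gate_publish` name and dict shape are hypothetical, not the orchestrator's real API):

```python
# Hypothetical publish gate: fail fast if upstream Compliance said blocked.
def gate_publish(compliance_output: dict) -> None:
    """Raise before publish when any finding carries blocked: true."""
    if compliance_output.get("overall_status") == "blocked":
        blockers = [
            f["summary"]
            for f in compliance_output.get("findings", [])
            if f.get("blocked")
        ]
        raise RuntimeError("publish blocked by compliance: " + "; ".join(blockers))
```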
`pass`
No findings worth surfacing. Everything's clean. Workflow proceeds normally with no compliance section in the synthesis report (or just a one-line "Compliance: pass").
Strictness — how to choose
The Compliance agent has a `strictness` option with three values:
- `lenient` — only flags definite blockers (clearly false claims, music-licensing violations on business accounts, prohibited categories). Use this when you've already had legal review on a campaign and just want a final platform check.
- `standard` (default) — flags blockers plus serious warnings (substantiation issues, ambiguous policy fits). Right default for most workflows.
- `strict` — flags blockers, warnings, and soft issues (anything an aggressive reviewer might note). Use this for first-time launches, regulated categories (supplements, financial), or when the brand is risk-averse.
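One way to picture the three levels is as a severity cutoff. This sketch is illustrative, with assumed cutoffs; it is not the agent's actual filtering logic:

```python
# Illustrative strictness filter: which findings get surfaced at each level.
SEVERITY_RANK = {"info": 0, "warning": 1, "error": 2, "critical": 3}
MIN_RANK = {"lenient": 2, "standard": 1, "strict": 0}  # assumed cutoffs

def surfaced(findings: list[dict], strictness: str = "standard") -> list[dict]:
    """Keep findings at or above the cutoff; blocked ones always surface."""
    cutoff = MIN_RANK[strictness]
    return [
        f for f in findings
        if f.get("blocked") or SEVERITY_RANK[f["severity"]] >= cutoff
    ]
```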
Set `strictness` via the workflow's per-step options. Per the seeded campaign_launch workflow definition (~/projects/tiktok-army/tiktok_army/orchestrator/definitions.py:165), Compliance runs at default strictness. To override, edit the workflow or supply a per-run option.
Real-world reading order
When a workflow surfaces a Compliance finding, read it in this order:
1. `overall_status` — pass / pass_with_warnings / blocked. This tells you the urgency.
2. The list of findings with `blocked: true`. These are non-negotiable; fix them or the run won't publish.
3. `error`-severity findings. Fix or consciously accept.
4. `warning`-severity findings. Read each `summary`. Decide per-item.
5. `info`-severity findings. Skim. Useful for context, rarely actionable.
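That reading order amounts to a sort key. A sketch, assuming findings are dicts with `blocked` and `severity` fields:

```python
# Triage sort implementing the reading order above (field names assumed).
SEVERITY_RANK = {"critical": 3, "error": 2, "warning": 1, "info": 0}

def triage(findings: list[dict]) -> list[dict]:
    """Blocked findings first, then by descending severity."""
    return sorted(
        findings,
        key=lambda f: (f["blocked"], SEVERITY_RANK[f["severity"]]),
        reverse=True,
    )
```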
Things to know
- The Compliance agent flags issues — it doesn't fix them. Fixing is on you (or the Listing Optimizer for caption changes; or you re-run with revised input).
- Music licensing flags are based on TikTok's Commercial Music Library taxonomy. The agent looks up the sound's pool by ID. If TikTok moves a sound between pools, a previously-clean post can become flagged on the next audit.
- The agent reads only the captions and sounds it can see through the API. It doesn't audit the video content itself for visual policy issues (no nudity / weapons / etc.) — that's TikTok's own ingestion-time check, which happens after publish.
- For regulated industries (alcohol, supplements, finance, gambling), `strict` is strongly recommended on Campaign Launch. The cost of a takedown is much larger than the cost of an over-cautious flag.
- Compliance runs read-only. It never changes a post or caption. A flag is a recommendation; you act on it via Listing Optimizer (for captions) or direct edits.
LDR Compliance — TikTok Shop late-dispatch enforcement
The Compliance agent above looks at content. LDR Compliance (`ldr_compliance.py`) looks at operations — specifically the Late Dispatch Rate that TikTok Shop restarted enforcing on 2026-04-06.
Why it exists
LDR = orders dispatched more than two business days after entering "Awaiting Shipment", divided by total orders, on a rolling 30-day window. Sellers above 10% face traffic throttling, product de-listings, and Shop Performance Score penalties (which feed into AHR in July 2026). The agent is the seller-facing early warning so you can ship at-risk orders before they breach.
What it does on every run
- Pulls the account's current LDR % from the Shop provider (rolling 30 days).
- Identifies at-risk orders — orders not yet shipped that are within `at_risk_hours_to_breach` (default 12) of the 2-business-day SLA — so the operator can ship them before they breach.
- Raises alerts via `tiktok_alerts` when LDR trips a threshold, deduped against open alerts of the same category so you don't get re-paged for a still-firing condition.
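The LDR math and the at-risk window can be sketched as follows. Note that the real SLA is 2 business days; this sketch approximates it as 48 clock hours, and the order dict shape (`awaiting_since`, `shipped`) is assumed, not the provider's real schema:

```python
from datetime import datetime, timedelta

def ldr_pct(late_orders: int, total_orders: int) -> float:
    """Late Dispatch Rate: late dispatches / total orders (rolling 30 days)."""
    return late_orders / total_orders if total_orders else 0.0

def at_risk(orders: list[dict], now: datetime,
            at_risk_hours_to_breach: float = 12, sla_hours: float = 48) -> list[dict]:
    """Unshipped orders within N hours of the dispatch SLA.

    sla_hours=48 stands in for the 2-business-day SLA; the real agent would
    use a business-day calendar rather than raw clock hours.
    """
    out = []
    for o in orders:
        if o["shipped"]:
            continue
        breach_at = o["awaiting_since"] + timedelta(hours=sla_hours)
        hours_left = (breach_at - now).total_seconds() / 3600
        if hours_left <= at_risk_hours_to_breach:
            out.append({**o, "hours_to_breach": round(hours_left, 1)})
    return out
```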
Thresholds you can tune
- •
ldr_warning_pct(default0.07= 7%) — soft alert. Heads-up that you're trending toward TikTok's enforcement line. - •
ldr_critical_pct(default0.10= 10%) — TikTok's enforcement line itself. Hitting this triggers the platform-side consequences listed above; the agent emits a critical alert. - •
at_risk_hours_to_breach(default12) — orders within this many hours of breaching the 2-business-day SLA show up in the at-risk list. Drop this lower (e.g.6) for very tight cutover; raise it for more buffer.
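The thresholds map to tiers in the obvious way; a sketch using the defaults above (the `ldr_tier` name is illustrative):

```python
# Threshold-to-tier mapping using the documented defaults.
def ldr_tier(ldr: float, warning_pct: float = 0.07, critical_pct: float = 0.10) -> str:
    """Classify an LDR fraction as ok / warning / critical."""
    if ldr >= critical_pct:
        return "critical"   # TikTok's enforcement line
    if ldr >= warning_pct:
        return "warning"    # trending toward enforcement
    return "ok"
```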
Output shape
Each run returns `checked_count`, `alerts_raised`, `alerts_deduped`, and a `per_account` array. Each `per_account` entry carries the account's `ldr_pct`, the list of `at_risk_orders` with hours-to-breach, the computed tier (`ok` / `warning` / `critical`), and which alert categories were raised.
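A run result might look like this. Field names come from the description above; the account identifier key, the values, and the alert category string are made up for illustration:

```python
# Illustrative run result; values and the "account"/"ldr_warning" names are made up.
result = {
    "checked_count": 3,
    "alerts_raised": 1,
    "alerts_deduped": 1,
    "per_account": [
        {
            "account": "shop-acme",   # assumed identifier field
            "ldr_pct": 0.08,
            "tier": "warning",
            "at_risk_orders": [{"order_id": "o-123", "hours_to_breach": 6.5}],
            "alert_categories": ["ldr_warning"],
        },
    ],
}
```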
Where it shows up
LDR Compliance runs as the lead step in the TikTok Shop Audit seeded workflow (alongside Account Health, Shadowban Sentinel, and the content Compliance agent). Pick "TikTok Shop Audit" as the brief outcome to run it on-demand. It can also run on a cron against active Shop accounts and emits alerts to the alerts feed; you'll see those in the inbox/notifications surface and on the account-detail page when an account trips a threshold.
Things to know
- LDR Compliance is a data-monitoring agent — no Claude calls, no Studio, no LLM cost. It's cheap to run frequently.
- It uses the TikTok Shop provider (`providers/tiktok_shop.py`); in `TIKTOK_PROVIDER_MODE=mock` it returns synthetic LDR percentages and at-risk order lists so you can validate the alerting end-to-end without real Shop credentials.
- It only fires for accounts you operate — it has no useful read on third-party Shop accounts because the Shop provider needs seller-side auth.
- Alert dedupe is by category against open alerts on the same account. Acknowledging or resolving the alert in the inbox lets the agent re-raise it on the next breach.
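The dedupe rule can be pictured as a membership check on (account, category) pairs over currently open alerts. A sketch with assumed shapes, not the agent's real alert store:

```python
# Sketch of category-level alert dedupe (shapes assumed for illustration).
def should_raise(account: str, category: str,
                 open_alerts: set[tuple[str, str]]) -> bool:
    """Skip raising when an open alert of the same category already exists
    on the same account; resolving that alert re-arms the category."""
    return (account, category) not in open_alerts

# One open warning-tier alert on this account.
open_alerts = {("shop-acme", "ldr_warning")}
```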