The agents output structured scores in several places that matter to operators; the three most important are shadowban risk, engagement trend, and audience segment scores. All three look like simple labels or percentages, but each is calibrated against something specific. Reading them wrong leads to bad decisions.
Shadowban risk
There are two surfaces that emit a shadowban signal. They look similar but answer different questions, and reading the wrong one will mislead you.
`shadowban_risk` — the Account Health canary
Where it appears. Output of the Account Health agent (field shadowban_risk). Surfaces in the synthesis report under "Account health" and on the run page next to the overall score. This runs on every Profile Audit and Post-Launch Loop, against your own active accounts.
The three levels.
- low — no signal of suppression. View velocity is steady or growing, comments are showing up on time, and there is no evidence of rate limiting on the API side.
- medium — at least one canary is firing but the others are clean. Examples: view velocity dropped >20% in 7 days but comments are healthy; or comments are slow but views are normal. One signal could be a content miss (a video that just didn't land); two or more starts looking like suppression.
- high — multiple suppression signals at once. View velocity collapsed, the comment surface is much smaller than expected, sometimes paired with policy flags. At high, do not run paid amplification — you'll be paying to push content TikTok is throttling.
What it's calibrated against. The Account Health agent compares the account's last 14 days against its own 30-day baseline (in tiktok_army/providers/_mock_data.py:account_health for mock mode, the same shape against live data in real mode). It is not benchmarked against the niche — a small account hitting 800 views per video with a stable trend reads low, the same way a million-follower account hitting 400k reads low. Suppression is detected by change, not absolute level.
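The change-not-level calibration can be illustrated with a minimal sketch. This is not the agent's actual code — the function name, the boolean `comments_healthy` input, and the two-signal escalation are assumptions for illustration; the real canary reads richer provider data.

```python
# Illustrative sketch (not the actual agent code): the canary compares the
# recent window against the account's own baseline, so suppression is
# detected by relative change, not absolute view counts.

def shadowban_risk(avg_views_14d: float, avg_views_30d: float,
                   comments_healthy: bool) -> str:
    """Toy version of the Account Health canary's three-level output."""
    if avg_views_30d <= 0:
        return "medium"  # no baseline to compare against — investigate
    view_drop = 1.0 - (avg_views_14d / avg_views_30d)
    signals = 0
    if view_drop > 0.20:       # velocity dropped >20% vs. baseline
        signals += 1
    if not comments_healthy:   # comment surface is slow or shrinking
        signals += 1
    if signals == 0:
        return "low"
    if signals == 1:
        return "medium"        # one canary firing — could be a content miss
    return "high"              # multiple signals — looks like suppression

# A small account and a large one read identically when the trend is stable:
print(shadowban_risk(800, 800, True))           # low
print(shadowban_risk(400_000, 410_000, True))   # low
```

Note that only the ratio to the account's own baseline matters — 800 steady views and 400k steady views both read low.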
When to trust it. Trust low and high reliably. Treat medium as "investigate before acting." If you see medium and you're about to spend money, click through to the run's per-step trace and read the shadowban_signals array — that's the actual reasoning.
When to override it. If the account just changed strategy (you started posting different content last week), a temporary medium can be a strategy lag, not a shadowban. Re-run the audit in 7 days; if it stays medium or escalates, treat it as real.
`risk_score` — the Shadowban Sentinel deep-dive
Where it appears. Output of the dedicated Shadowban Sentinel agent (shadowban_sentinel.py) — a 0–100 composite score with a per-signal breakdown. It runs as a step in the seeded Profile Audit and Post-Launch Loop workflows (next to Account Health), and is also the agent that powers the planned free /public-tools/shadowban page when run on-demand against a single handle.
Why it exists alongside Account Health. The Account Health canary is a categorical alert across many problems for accounts you operate. Shadowban Sentinel is narrower and deeper: it independently checks four signals — hashtag visibility (search-from-clean-account probe), reach collapse, engagement decay, and provider-flagged status — weights them, and returns one composite number plus the evidence per signal. It's the agent to run when the question is "is this specific handle being suppressed, and why do we think so?"
The four levels (composite risk_tier).
- low (<25) — no signal trips its threshold.
- medium (25–49) — one signal is firing, usually engagement decay or a soft hashtag-visibility miss.
- high (50–74) — two or more signals firing, including either reach collapse or hashtag invisibility.
- critical (75+) — multiple strong signals plus typically a provider flag. Treat as confirmed suppression.
Signal weights. Calibrated as: hashtag visibility 35, reach collapse 30, engagement decay 20, provider flag 15. The breakdown is in tiktok_army/agents/shadowban_sentinel.py and will move as we collect real-data calibration samples.
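The weights and tier boundaries can be sketched as follows. This is a simplification, not the code in shadowban_sentinel.py — the real agent grades each signal with evidence, whereas this sketch assumes each signal is simply firing or not.

```python
# Illustrative sketch of how the weighted signals map to a composite tier.
# Real weights live in tiktok_army/agents/shadowban_sentinel.py; treating
# each signal as a simple boolean is an assumption for illustration.

WEIGHTS = {
    "hashtag_visibility": 35,
    "reach_collapse": 30,
    "engagement_decay": 20,
    "provider_flag": 15,
}

def risk_score(firing: dict[str, bool]) -> tuple[int, str]:
    """Sum the weights of firing signals, then bucket into a risk_tier."""
    score = sum(w for name, w in WEIGHTS.items() if firing.get(name))
    if score < 25:
        tier = "low"
    elif score < 50:
        tier = "medium"
    elif score < 75:
        tier = "high"
    else:
        tier = "critical"
    return score, tier

print(risk_score({"hashtag_visibility": True}))                          # (35, 'medium')
print(risk_score({"hashtag_visibility": True, "reach_collapse": True}))  # (65, 'high')
```

The weights encode the hierarchy in the tier descriptions: hashtag invisibility or reach collapse alone is enough to reach medium, while crossing into high takes two signals including one of the heavy pair.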
Which one to look at when.
- Profile Audit synthesis report — both surface side-by-side. The categorical canary tells you "is something off?"; the composite tells you "how bad, and which signals?".
- Investigating a specific suspected shadowban (yours or a third party's) → focus on Shadowban Sentinel's per-signal evidence; the canary's three labels won't have enough resolution.
- Running a public-facing free check on an arbitrary handle → Shadowban Sentinel is the surface (Account Health needs accounts you operate).
Engagement trend
Where it appears. Output field engagement_trend on Account Health, and engagement_trend_pct_wow in the underlying provider data.
What the strings mean. The agent emits short tags like declining_3pct_wow, stable_+1pct_wow, growing_8pct_wow. The number is the change in average engagement rate week-over-week. The engagement rate itself is a composite: (likes + comments + shares + saves) / views, computed across all of the account's posts in the lookback window.
Why week-over-week and not month-over-month. TikTok cycles fast — a month-over-month comparison loses signal in noise. Week-over-week captures the trend you can act on this week.
How to interpret the percentages.
- ±2% w/w — noise. The account is essentially flat. Don't rebuild your strategy on a 1.8% bump.
- ±3–7% w/w — real movement. Worth investigating but not panicking. A 5% decline three weeks running is worse than a single 10% week.
- >7% w/w in either direction — something has changed. Either content (you posted something different), audience (the FYP shifted you), or platform (algo update). Identify which before reacting.
The trap. Engagement rate often rises when views fall, because the same absolute number of interactions over a smaller view base produces a bigger rate. Always read engagement trend alongside avg_views_14d and avg_views_30d (also on the Account Health output). A "+12% engagement" with views down 30% is a red flag, not a win.
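A minimal sketch of the tag and the trap check — the helper names here are assumptions for illustration, not the agent's API; only the rate formula and the ±2% noise band come from the text above.

```python
# Sketch: engagement rate = (likes + comments + shares + saves) / views,
# tagged by week-over-week change, with a check for the rising-rate /
# falling-views trap.

def engagement_rate(likes: int, comments: int, shares: int,
                    saves: int, views: int) -> float:
    return (likes + comments + shares + saves) / views

def trend_tag(rate_this_week: float, rate_last_week: float) -> str:
    """Emit a tag in the agent's style, e.g. declining_3pct_wow."""
    pct = round((rate_this_week - rate_last_week) / rate_last_week * 100)
    if pct > 2:
        return f"growing_{pct}pct_wow"
    if pct < -2:
        return f"declining_{abs(pct)}pct_wow"
    return f"stable_{pct:+d}pct_wow"   # inside the ±2% noise band

def looks_like_the_trap(tag: str, avg_views_14d: float,
                        avg_views_30d: float) -> bool:
    """Rising engagement on a collapsing view base is a red flag, not a win."""
    views_drop = 1.0 - avg_views_14d / avg_views_30d
    return tag.startswith("growing_") and views_drop > 0.20

print(trend_tag(0.108, 0.100))                              # growing_8pct_wow
print(looks_like_the_trap("growing_12pct_wow", 700, 1000))  # True
```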
Audience segment scores
Where it appears. Output of the Audience Mapper agent. Each segment has a score from 0.0 to 1.0 and a size_bucket of small, medium, or large.
What the score means. It's a composite of two things: how strongly the segment's signals (comment keywords, watch time, save rate) match the account's known niche, and how clean the cluster is (tight vs sprawling). It's not a forecast of conversion rate or a ranking against the rest of the universe — it's a confidence score in the segment itself.
Calibration anchors (rough heuristics).
- 0.80+ — high-confidence segment. Tight signal cluster, clear comment keywords, distinguishable from other segments. Lean in.
- 0.60–0.79 — usable but soft. Worth targeting but expect a noisier creative response. Test before scaling.
- 0.50–0.59 — borderline. The Audience Mapper drops segments below 0.50 by default (the min_score option). Anything in this band is an "I noticed this but I'm not sure" segment.
- <0.50 — dropped from output. If you want to see them, lower min_score on the agent's options.
Size buckets are independent of score. A small segment can have a 0.92 score (a tight, valuable niche). A large segment can have a 0.65 score (a broad cluster you can target but with weaker signal). Score tells you confidence; size tells you reach. Multiply them mentally to prioritize.
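The "multiply them mentally" heuristic can be sketched as a sort. The numeric reach weights assigned to the size buckets below are illustrative assumptions, not product values — the point is only that confidence and reach multiply.

```python
# Sketch: prioritize segments by score (confidence) x reach (size bucket).
# The bucket-to-number mapping is an assumption for illustration.

REACH = {"small": 1, "medium": 3, "large": 9}

def prioritize(segments: list[dict]) -> list[dict]:
    """Order segments by score * reach, highest first."""
    return sorted(segments,
                  key=lambda s: s["score"] * REACH[s["size_bucket"]],
                  reverse=True)

segments = [
    {"name": "Morning routine minimalists", "score": 0.92, "size_bucket": "small"},
    {"name": "Broad beauty watchers", "score": 0.65, "size_bucket": "large"},
]
# Under these weights the broad-but-soft large segment outranks the tight
# small one on reach alone:
print([s["name"] for s in prioritize(segments)])
```

With different reach weights the ranking can flip — which is exactly why the document tells you to hold both numbers in mind rather than read score alone.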
The subtle bit. Segment scores are calibrated relative to this account, not to TikTok at large. A 0.84-fit "Morning routine minimalists" segment for a beauty brand says "this account's audience contains a coherent cluster that looks like morning routine minimalists" — not "this is a 0.84-strength segment on TikTok overall." Don't compare scores across two different accounts' audits as if they were on the same scale.
Trend fit scores
Where it appears. Output of Trend Watcher. Three places to watch:
- fit_score_for_brand — overall 0–100 fit of the trend basket to the account.
- Per-sound fit flag — low/medium/high.
- Per-hashtag lift_pct — week-over-week growth in usage.
Reading lift_pct. Lift is growth in usage volume, not a fit signal. A +38% hashtag is being used 38% more this week than last; that says nothing about whether your audience cares about it. Pair lift with the per-hashtag category to decide whether the trend is adjacent to your niche.
Reading fit_score_for_brand. Above 70 means lean in this week. 50–70 means pick the top 1–2 trends and try them, don't pivot wholesale. Below 50 means the rising trends right now don't fit your brand — keep doing what works for you. See Trends and Sounds for more.
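Those three bands can be written down as a one-screen sketch; the helper name is an assumption for illustration, not part of the Trend Watcher API.

```python
# Sketch: map fit_score_for_brand (0-100) to the recommended action
# described above.

def fit_action(fit_score_for_brand: int) -> str:
    if fit_score_for_brand > 70:
        return "lean in this week"
    if fit_score_for_brand >= 50:
        return "test the top 1-2 trends; don't pivot wholesale"
    return "skip; keep doing what works for you"

print(fit_action(82))  # lean in this week
```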
Compliance overall_status
Three values: pass, pass_with_warnings, blocked. See Compliance Warnings for what each one means and what it does to a workflow run.
A general rule
Every score in TikTok Army has a reasoning trail. From the dashboard run page, click any agent step to see the per-step trace — every Claude call, the system prompt it used, the user prompt it sent, the raw response. If a number looks wrong, the trace will tell you why.