The Approval Flow

When approvals appear, what to look at before approving, what reject-with-feedback actually does, and what the future approver-agent will change.

When approvals appear

Approvals come from one source: workflow steps that are explicitly defined as approval gates in the workflow recipe. Today's seeded workflows have approvals only on outbound or paid actions:

  • Profile Audit — no approval gates. Read-only.
  • Campaign Launch — four approval gates (listings, publish, ad spend, outreach). See How to Launch a Campaign.
  • Post-Launch Loop — no approval gates by default. Triages comments and recommends next moves; doesn't act on them.

Custom workflows you build can put approvals anywhere. The pattern: insert an approval_gate node downstream of the agent whose output you want to review. The dashboard surfaces a Review card at that point and the workflow halts until you act.
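A custom recipe with a gate might look something like this sketch. The field names (`nodes`, `upstream`, and so on) are illustrative, not the real recipe schema — check an existing seeded workflow for the actual shape:

```python
# Hypothetical recipe shape -- every field name here is an assumption.
recipe = {
    "nodes": [
        {"id": "outreach_agent", "type": "agent"},
        {
            "id": "outreach_review",
            "type": "approval_gate",
            # The gate sits downstream of the agent whose output you review.
            "upstream": "outreach_agent",
            # The field of the upstream output rendered on the Review card.
            "target_output_field": "drafts",
        },
        # Nothing downstream of the gate runs until the approval is acted on.
        {"id": "send_dms", "type": "agent", "upstream": "outreach_review"},
    ]
}

gate = next(n for n in recipe["nodes"] if n["type"] == "approval_gate")
print(gate["upstream"])  # outreach_agent
```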

Two things to know about the timing:

  1. Approvals don't time out. The workflow stays paused until you act. You've already paid for the steps that ran; nothing further costs anything until you approve.
  2. Multiple approvals can appear at once. Campaign Launch has approval gates downstream of parallel steps, so you may see Listings, Ad Spend, and Outreach all waiting in your queue at the same time. They can be acted on in any order.

How approvals show up in the dashboard

Two places.

The Approvals page. A queue of every pending approval across every active run for your workspace. Newest first. Click one and you land on the Review card.

The run page. Inside the live DAG view of a single workflow run, an approval_gate node turns yellow and shows a "Review" button when it's pending. Clicking that button opens the same Review card.

What's on a Review card

The card has three parts.

The proposal. This is the upstream agent's output — the listing variants, the caption + hashtags, the ad campaign list, the DM drafts. Rendered in a readable format, not raw JSON. The exact field shown is determined by the workflow definition's target_output_field for that approval node (e.g. variants, post_caption, campaigns, drafts).
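In other words, the card renders one named field of the upstream output and keeps the rest out of the way. A toy illustration of that selection (this is not the real rendering code, just the idea):

```python
# Illustrative only: how a Review card selects what to show as "the proposal".
upstream_output = {
    "post_caption": "Behind the scenes of our spring drop",
    "hashtags": ["#springdrop", "#bts"],
    "model_trace": "...",  # reasoning trail -- shown separately, collapsed
}

def proposal_for_card(output: dict, target_output_field: str):
    # The workflow definition names the field; everything else stays
    # in the reasoning-trail section, not the proposal.
    return output[target_output_field]

print(proposal_for_card(upstream_output, "post_caption"))
```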

The reasoning trail. A collapsible section showing how the proposal was produced. For LLM-pure agents (Listing Optimizer, Creator Outreach, Compliance, etc.) this is the Claude prompt(s) and response(s). For Studio-consuming agents (Content Producer, Ad Campaign Director) it includes the Studio brief and the rendered asset preview where applicable.

The actions.

  • Approve — green button. Marks the step succeeded, releases the workflow to continue.
  • Reject with feedback — red button. Opens a small text field. Marks the step failed, terminates the run, records the feedback.
  • Approve with comment (optional) — the same green button, with a note attached. Treated as an approval. The comment is stored on the step and surfaces in the synthesis report so a reader knows the approver had a note.

What to look at before approving

Different gates need different sanity checks. The four gates in Campaign Launch each have their own "what to look at" checklist (see How to Launch a Campaign). Here's a generic checklist that applies to anything:

  1. Does the proposal match the brief? Open the brief in a separate tab. If the brief said "audit-only" and the proposal is launching ads, something has gone wrong upstream — reject and investigate.
  2. Is the cost reasonable? Approvals on paid actions show the proposed spend explicitly. Cross-reference against your monthly budget. The system enforces a hard cap (lib.spend_cap) but the proposal might still be larger than you'd want.
  3. Does it sound like your brand? The agents pull voice from your brand profile, but a slightly off-brand caption or DM is still the most common reason to reject.
  4. Would you be embarrassed if this went out as-is? Final test. If yes, reject. If "mostly fine but I'd tweak X", approve-with-comment and let the comment be the change record — or reject and re-run with notes.

What reject-with-feedback actually does

Today (April 2026), rejection is terminal for the run. The implementation is at ~/projects/tiktok-army/tiktok_army/routers/approvals.py:143:

  • The step is marked failed.
  • The workflow run is marked failed with an error message of Rejected at <step label>: <your reason>.
  • A workflow_run.failed event is emitted to the live stream.
  • The synthesis step does not run.
  • The feedback is preserved on the step's output_jsonb.
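The bullets above can be sketched as a single handler. This is a hedged reconstruction of the described behavior, not the actual code in approvals.py — `Step`, `Run`, and `emit` are stand-ins:

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for the real models and event bus.
events = []

def emit(event_type: str, **payload):
    events.append((event_type, payload))

@dataclass
class Step:
    label: str
    status: str = "pending"
    output_jsonb: dict = field(default_factory=dict)

@dataclass
class Run:
    id: int
    status: str = "running"
    error: str = ""

def reject_step(run: Run, step: Step, reason: str) -> None:
    step.status = "failed"
    # Feedback preserved on the step for post-mortems (and, later, training).
    step.output_jsonb["rejection_feedback"] = reason
    run.status = "failed"
    run.error = f"Rejected at {step.label}: {reason}"
    emit("workflow_run.failed", run_id=run.id)
    # No synthesis step: rejection is terminal for the run.

run = Run(id=42)
step = Step(label="Outreach approval")
reject_step(run, step, "DM tone is off-brand")
print(run.error)  # Rejected at Outreach approval: DM tone is off-brand
```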

To act on the feedback, you submit a new brief — usually with the rejection reason copied into the new brief's notes — and run the workflow again.

This is intentional for now: keeping rejection terminal makes the audit trail clean and predictable. A "rejected with edits, please retry this one step" pattern is on the roadmap (see "the future approver-agent" below).

What the rejection feedback is good for, even today:

  • It's stored on the run forever, so the post-mortem on a failed launch has the actual reason.
  • It would feed into the synthesis report, when one runs. A rejection terminates the run before synthesis, so this doesn't happen today — but on a partial failure where a non-blocking step was rejected, the feedback would appear in the report.
  • It feeds the future approver agent's training set.

The future approver-agent

The roadmap includes an approver agent — a specialized agent that watches your approval/rejection patterns over time and starts proposing edits before you see the gate. The shape of it (subject to change):

  • The agent runs before the human approval card is displayed, reading the proposal and comparing it against your historical accept/reject patterns for similar proposals.
  • If the agent is confident the proposal would be rejected as-is, it pre-generates an edited version reflecting the patterns it's learned, and the dashboard shows you both versions side-by-side.
  • If the agent is confident the proposal would be accepted as-is, it can optionally auto-approve (off by default — opt in per workflow).
  • The agent is itself trained against your written feedback on rejected proposals. Every reject-with-feedback you submit today becomes training signal later.
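The routing described above could be sketched like this. Purely illustrative: the approver agent is not implemented, and every name and threshold here is invented:

```python
# Invented sketch of the roadmap shape -- nothing here exists yet.
def route_proposal(proposal, predict_rejection_confidence, auto_approve_enabled=False):
    # predict_rejection_confidence: hypothetical model scoring 0..1, trained
    # on your historical accept/reject feedback for similar proposals.
    confidence = predict_rejection_confidence(proposal)
    if confidence > 0.8:
        # Likely reject: pre-generate an edited version and show both
        # side-by-side on the Review card.
        return "show_original_and_edited"
    if confidence < 0.2 and auto_approve_enabled:
        # Likely accept, and this workflow opted in: skip the human gate.
        return "auto_approve"
    # Otherwise the gate behaves exactly as it does today.
    return "show_original"

print(route_proposal({"caption": "..."}, lambda p: 0.9))  # show_original_and_edited
```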

What this changes for you:

  • Today: act as the QA layer on every gate.
  • Future: act as the QA layer on the agent's edits, which are a tighter starting point — fewer obvious rejects, faster approve loop.

The approver-agent is not yet implemented. You'll see it land as a fifth surface alongside Brief / Workflow / Catalog / Approvals when it's ready.

Things to know

  • Approvals are per-workspace and (today) any workspace member can approve. Role-based gates (only ops-lead can approve ad spend over $X) are on the list but not built yet — the audit log captures the actor field on each approval action, which will be the hook.
  • If you reject a gate in the middle of Campaign Launch, the upstream Studio asset has already rendered (and you've already paid Studio's compute cost). Studio doesn't refund. Plan accordingly.
  • The Approval queue's API is at GET /api/dashboard/approvals and the actions are at POST /api/dashboard/approvals/{step_id}/{approve|reject}. If you need to script approvals from your own tooling, that's the surface.
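A minimal scripting sketch against those endpoints. The base URL, auth, and request payload are assumptions about your deployment, not documented API — verify against routers/approvals.py before relying on it:

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed; point at your dashboard host

def approval_action_url(step_id: str, action: str) -> str:
    # Matches POST /api/dashboard/approvals/{step_id}/{approve|reject}.
    assert action in ("approve", "reject")
    return f"{BASE}/api/dashboard/approvals/{step_id}/{action}"

def act_on_approval(step_id: str, action: str, feedback: str = "") -> dict:
    # The "feedback" payload key is a guess; reject-with-feedback needs
    # the reason to land on the step's output_jsonb.
    body = json.dumps({"feedback": feedback}).encode()
    req = urllib.request.Request(
        approval_action_url(step_id, action),
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```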