Local dev — the happy path
The fastest way to get the whole system running locally is mock mode for everything. Both Claude and TikTok providers fall back to deterministic fixtures, so you don't need any API keys.
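As a mental model, the fixture fallback looks roughly like this; the class name, fixture keys, and values below are hypothetical illustrations, not the actual mock_claude / _mock_data.py code:

```python
# Hypothetical sketch of the mock-mode pattern. Names and fixture shapes
# are illustrative only; see _mock_data.py for the real fixtures.
from dataclasses import dataclass

_FIXTURES = {
    # deterministic canned response keyed by (agent, handle)
    ("profile_audit", "@lakucosmetics"): {"followers": 12400, "niche": "beauty"},
}

@dataclass
class MockTikTokProvider:
    mode: str = "mock"

    def fetch_profile(self, agent: str, handle: str) -> dict:
        if self.mode == "mock":
            # Same input always yields the same fixture, so no API key is needed
            # and test runs are reproducible.
            return _FIXTURES[(agent, handle)]
        raise NotImplementedError("real mode would call the TikTok API here")

provider = MockTikTokProvider()
print(provider.fetch_profile("profile_audit", "@lakucosmetics"))
```

The point of determinism is that a re-run of the same brief produces byte-identical agent inputs, which makes DAG and report snapshots diffable.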
Backend
cd ~/projects/tiktok-army
# Install (uv handles venv + deps; there is no pip path)
uv sync --extra dev
# Run with mock mode defaults
uv run uvicorn tiktok_army.main:app --reload --port 8000

CLAUDE_MODE=mock and TIKTOK_PROVIDER_MODE=mock are the defaults in ~/projects/tiktok-army/tiktok_army/config.py. The placeholder secrets in config.py let Pydantic Settings instantiate without real keys.
The backend will start and respond at http://localhost:8000. Health checks: /healthz, /readyz. OpenAPI docs at /docs.
Dashboard
cd ~/projects/tiktok-army/dashboard
npm install # first time only
TIKTOK_ARMY_API_URL=http://localhost:8000 \
DASHBOARD_WORKSPACE_ID=00000000-0000-0000-0000-000000000000 \
npm run dev

The dashboard runs on port 3001 (set in package.json). It proxies API calls to the backend URL via the /api/dashboard/* Next.js route handlers.
DASHBOARD_WORKSPACE_ID is the workspace UUID the dashboard sends in the X-Workspace-Id header for every API call. In dev you can use the all-zeros UUID; production reads from the auth session.
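Sketched in Python for brevity (the dashboard itself is TypeScript), the header contract looks like this; the helper name is hypothetical:

```python
# Illustrative only: shows the X-Workspace-Id contract the dashboard's proxy
# routes follow. The function name is an assumption, not dashboard code.
DEV_WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

def build_api_headers(workspace_id: str = DEV_WORKSPACE_ID) -> dict[str, str]:
    """Headers every /api/dashboard/* proxy call sends to the backend."""
    return {
        "X-Workspace-Id": workspace_id,  # all-zeros UUID is fine in dev
        "Content-Type": "application/json",
    }

print(build_api_headers())
```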
Verify
Hit http://localhost:3001. Submit a brief for @lakucosmetics (the seeded handle in _mock_data.py), pick Profile Audit, run it. You should see the live DAG progress through the steps and end with a Markdown report.
If you don't see anything happen, check:
- Backend logs for errors (the most common cause is a missing fixture for an agent in mock_claude._FIXTURES).
- Browser network tab for failed /api/dashboard/* calls (often TIKTOK_ARMY_API_URL is wrong or the backend isn't running).
- That the SSE stream connection at /api/dashboard/workflows/runs/<id>/stream is open.
Postgres on WSL
If you want real database persistence (so the trace pipeline actually writes rows you can inspect), you need Postgres locally.
# Install on WSL Ubuntu
sudo apt-get update
sudo apt-get install -y postgresql postgresql-contrib
# Start
sudo service postgresql start
# Create role + database
sudo -u postgres psql <<'SQL'
CREATE USER tiktok_army WITH PASSWORD 'tiktok_army';
CREATE DATABASE tiktok_army_dev OWNER tiktok_army;
GRANT ALL PRIVILEGES ON DATABASE tiktok_army_dev TO tiktok_army;
SQL
# Verify
psql postgresql://tiktok_army:tiktok_army@localhost:5432/tiktok_army_dev -c '\l'

The default DATABASE_URL in config.py is:

postgresql+asyncpg://tiktok_army:tiktok_army@localhost:5432/tiktok_army_dev

If you use a different Postgres setup, override via env:

DATABASE_URL=postgresql+asyncpg://user:pass@host:5432/dbname \
uv run uvicorn tiktok_army.main:app --reload

Note: the postgresql+asyncpg:// scheme is required by SQLAlchemy's asyncpg driver. The wrapper in lib/db.py:_build_engine_url will rewrite postgres:// and postgresql:// URLs automatically.
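A minimal sketch of that rewrite (the real logic in lib/db.py:_build_engine_url may differ in details):

```python
# Sketch of the scheme normalization described above. Not the actual
# implementation; lib/db.py:_build_engine_url is the source of truth.
def build_engine_url(url: str) -> str:
    """Rewrite plain Postgres URL schemes to SQLAlchemy's asyncpg scheme."""
    if url.startswith("postgres://"):
        return "postgresql+asyncpg://" + url[len("postgres://"):]
    if url.startswith("postgresql://"):
        return "postgresql+asyncpg://" + url[len("postgresql://"):]
    return url  # already postgresql+asyncpg:// (or something else entirely)

print(build_engine_url("postgres://u:p@localhost:5432/db"))
```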
Running migrations locally
The project's migration files (migration/00_.py) are designed to be copied into Studio's monorepo at infra/migrations/versions/ and run via alembic upgrade <revision> from there. They reference Studio-shared things like trigger_set_updated_at() and the brands table.
For purely local Postgres dev, two options:
- Mirror the Studio monorepo locally — clone Studio, copy the migration/*.py files into its versions/ dir, run alembic upgrade head. This is the closest-to-production path.
- Stand up a minimal schema by hand — write a one-off SQL script that creates just the tables this codebase's tests touch (tiktok_briefs, tiktok_workflows, tiktok_workflow_runs, tiktok_workflow_steps, tiktok_agent_runs, tiktok_agent_steps) plus the shared trigger_set_updated_at function.
Until the monorepo workflow is documented end-to-end, option 2 is faster for local trace inspection.
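For option 2, a throwaway generator like this can produce a starting-point script. The column sets below are guesses for illustration; the real schemas live in the migration files:

```python
# Hypothetical helper: emits bare-bones DDL for the tables listed above.
# Columns are placeholders; copy real column definitions from migration/*.py.
TABLES = [
    "tiktok_briefs", "tiktok_workflows", "tiktok_workflow_runs",
    "tiktok_workflow_steps", "tiktok_agent_runs", "tiktok_agent_steps",
]

def minimal_ddl() -> str:
    stmts = [
        "CREATE OR REPLACE FUNCTION trigger_set_updated_at() RETURNS trigger AS $$\n"
        "BEGIN NEW.updated_at = now(); RETURN NEW; END;\n"
        "$$ LANGUAGE plpgsql;"
    ]
    for t in TABLES:
        stmts.append(
            f"CREATE TABLE IF NOT EXISTS {t} (\n"
            "  id uuid PRIMARY KEY,\n"
            "  workspace_id uuid NOT NULL,\n"
            "  created_at timestamptz NOT NULL DEFAULT now(),\n"
            "  updated_at timestamptz NOT NULL DEFAULT now()\n"
            ");"
        )
    return "\n\n".join(stmts)

print(minimal_ddl())
```

Pipe the output into psql against tiktok_army_dev to get a schema the trace pipeline can write into.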
Env vars for real mode
To run against real Claude + real TikTok APIs, set these:
| Variable | What it is |
|---|---|
| CLAUDE_MODE=real | Switches Claude calls from mock fixtures to the Anthropic API |
| ANTHROPIC_API_KEY=sk-ant-... | Your Anthropic API key |
| TIKTOK_PROVIDER_MODE=real | Switches providers from mock data to TikTok APIs |
| TIKTOK_APP_KEY | TikTok app key |
| TIKTOK_APP_SECRET | TikTok app secret (used to verify HMAC webhooks) |
| TIKTOK_SHOP_API_KEY | TikTok Shop API key |
| TIKTOK_RESEARCH_API_KEY | TikTok Research API key |
| TIKTOK_OAUTH_ENCRYPTION_KEY | Column-level encryption key for OAuth refresh tokens |
| DATABASE_URL | Postgres connection string (asyncpg scheme) |
| STUDIO_API_URL | Studio Cloud Run URL for service-to-service calls |
| GCS_ASSETS_BUCKET | GCS bucket where Studio writes generated assets |
| PROJECT_ID | GCP project ID |
| REGION | GCP region (default us-central1) |
You can mix modes — e.g., real Claude with mock TikTok providers — by setting only some of them.
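On TIKTOK_APP_SECRET: verifying an HMAC webhook typically means signing the raw request body and comparing digests. A hedged sketch, in which the digest algorithm and signing payload are assumptions rather than TikTok's documented scheme:

```python
# Sketch of the HMAC pattern TIKTOK_APP_SECRET is used for. The sha256
# digest and raw-body payload are assumptions; check the webhook handler
# for the scheme TikTok actually specifies.
import hashlib
import hmac

def verify_webhook(app_secret: str, body: bytes, signature: str) -> bool:
    expected = hmac.new(app_secret.encode(), body, hashlib.sha256).hexdigest()
    # constant-time comparison avoids leaking the digest via timing
    return hmac.compare_digest(expected, signature)

secret = "test-secret"
body = b'{"event":"order.update"}'
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_webhook(secret, body, sig))  # → True
```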
GCP deploy
Two deploy scripts, both in the repo:
- ~/projects/tiktok-army/deploy.sh — backend.
- ~/projects/tiktok-army/dashboard/deploy.sh — dashboard.
Prerequisites
- gcloud CLI authenticated against the right project.
- The Artifact Registry repo axion-studio must exist in the project.
- The service account tiktok-army@${PROJECT_ID}.iam.gserviceaccount.com must exist with permissions for: Cloud SQL Client, Pub/Sub Publisher, Secret Manager Accessor, Storage Object Viewer/Creator (for GCS assets), Cloud Tasks Enqueuer.
- Secrets must be created in Secret Manager: postgres-database-url, anthropic-api-key, tiktok-app-key, tiktok-app-secret, tiktok-shop-api-key, tiktok-research-api-key, tiktok-oauth-encryption-key.
- The VPC connector axion-studio-dev-vpc must exist (for private Cloud SQL access).
- The Cloud SQL instance ${PROJECT_ID}:${REGION}:axion-studio-dev-pg must exist.
Backend deploy
cd ~/projects/tiktok-army
PROJECT_ID=axion-studio-dev REGION=us-central1 bash deploy.sh

The script (~/projects/tiktok-army/deploy.sh):

- Builds the Docker image. Important caveat: the Dockerfile does COPY ../../packages/axion_studio/... and only builds correctly when the build context is the Studio monorepo root, not this directory. Running bash deploy.sh from this directory will fail unless you've vendored the relevant Studio packages into the build context. The current deploy.sh runs gcloud builds submit . --tag …, which works only when this repo is checked out inside Studio's monorepo at products/axion-studio/services/tiktok-army/.
- Deploys to Cloud Run with the VPC connector for private Cloud SQL access and secrets via --set-secrets.
- Prints the service URL.
Dashboard deploy
cd ~/projects/tiktok-army
TIKTOK_ARMY_API_URL=https://tiktok-army-XXXX-uc.a.run.app bash dashboard/deploy.sh

If TIKTOK_ARMY_API_URL is omitted, the script discovers it via gcloud run services describe tiktok-army. Deploy the backend first (or pass the URL explicitly).
The dashboard service runs as a separate Cloud Run service (tiktok-army-dashboard) on port 3001 with --allow-unauthenticated. RBAC is on the dashboard's NextAuth layer, not on Cloud Run.
Post-deploy checks
# Backend health
curl https://tiktok-army-XXXX-uc.a.run.app/readyz
# OpenAPI surface
curl https://tiktok-army-XXXX-uc.a.run.app/docs
# Dashboard
open https://tiktok-army-dashboard-YYYY-uc.a.run.app

After any web-surface deploy: spawn a Playwright agent to verify the surface — never ask the user to click. The pattern lives at ~/.claude/projects/-home-samuel/memory/qa-agent-pattern.md.
Secrets via Secret Manager
The backend deploy.sh uses --set-secrets to wire Secret Manager secrets to env vars at runtime:
--set-secrets="\
DATABASE_URL=postgres-database-url:latest,\
ANTHROPIC_API_KEY=anthropic-api-key:latest,\
TIKTOK_APP_KEY=tiktok-app-key:latest,\
..."The format is ENV_VAR=secret-name:version. Cloud Run reads the secret at container start and injects it as the env var. The backend's config.py reads from env only — never from .env files — so this is the only injection path.
To rotate a secret:
echo -n 'new-value' | gcloud secrets versions add anthropic-api-key --data-file=-
# Cloud Run picks up :latest on next cold start, or you can force a redeploy:
gcloud run services update tiktok-army --region=us-central1

The dashboard service does NOT use --set-secrets — it only takes TIKTOK_ARMY_API_URL and DASHBOARD_WORKSPACE_ID as env vars (see dashboard/deploy.sh).
Migrations runbook
The migration files in this repo live at ~/projects/tiktok-army/migration/. Files 003 through 010 cover: the 8 base tables (003), notifications (004), trends (also numbered 004 — a naming collision; both 004 files exist), listing variants (005), compliance findings (006), audience segments (007), briefs (008), workflows (009), and agent steps (010).
The migrations are NOT runnable in-place. They're meant to be copied into the Studio monorepo's infra/migrations/versions/ directory and run with alembic from there.
To apply a new migration
- Write the migration file in ~/projects/tiktok-army/migration/. Follow the pattern in 008_tiktok_briefs.py — workspace_id NOT NULL, RLS policy, FORCE ROW LEVEL SECURITY, indexes.
- Copy the file into Studio's monorepo infra/migrations/versions/.
- From Studio's monorepo root: alembic upgrade <revision>.
- Verify the schema in Cloud SQL or local Postgres.
- Update Pydantic models in ~/projects/tiktok-army/tiktok_army/models/__init__.py to match (the migration is the source of truth; models mirror it).
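The per-table RLS boilerplate that pattern calls for looks roughly like the following. The policy name and the current_setting key are assumptions inferred from the description of 008_tiktok_briefs.py, not copied from it:

```python
# Hedged sketch of the RLS statements a new migration would emit per table.
# Policy name and session-variable key are hypothetical; check
# 008_tiktok_briefs.py for the real pattern before copying.
def rls_ddl(table: str) -> list[str]:
    return [
        f"ALTER TABLE {table} ENABLE ROW LEVEL SECURITY;",
        f"ALTER TABLE {table} FORCE ROW LEVEL SECURITY;",
        f"CREATE POLICY {table}_workspace_isolation ON {table} "
        f"USING (workspace_id = current_setting('app.workspace_id')::uuid);",
        f"CREATE INDEX {table}_workspace_id_idx ON {table} (workspace_id);",
    ]

for stmt in rls_ddl("tiktok_briefs"):
    print(stmt)
```

FORCE ROW LEVEL SECURITY matters because without it the table owner bypasses the policy, which silently defeats workspace isolation in tests that run as the owner role.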
Rolling back
Each migration has a downgrade() function. Run alembic downgrade <previous_revision> from Studio's monorepo. Be aware that some Postgres operations (notably enum value additions) can't be cleanly reversed — the 008_tiktok_briefs.py migration's downgrade drops the type entirely; if you've added new enum values in later migrations they'll be lost.
Things to know
- Local dev does not need Docker. The Dockerfile is for Cloud Run only and explicitly references monorepo paths that don't exist locally.
- uv is the only dependency manager. Don't pip install — it'll create a parallel environment that diverges from uv.lock.
- asyncio_mode=auto is set in pyproject.toml. Tests don't need @pytest.mark.asyncio.
- The axion_studio package is imported by lib/spend_cap.py etc. but commented out in pyproject.toml. It only resolves in the deployed Cloud Run image because the Dockerfile copies it from the monorepo. Locally, anything importing axion_studio.lib.spend_cap will fail unless you vendor the package onto PYTHONPATH.
- Models are pinned. Claude model IDs are pinned in lib/claude.py:_PRICING: claude-opus-4-7, claude-sonnet-4-6, claude-haiku-4-5-20251001. If Anthropic releases a new version, update both the model ID and the pricing table.
- ffmpeg is required at runtime for the transcoding step (Content Producer pattern). The Dockerfile installs it; local dev needs a system ffmpeg (sudo apt-get install ffmpeg on WSL).
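The pinned-model idea from the _PRICING bullet can be sketched as a lookup that fails loudly on unknown models. Only the model IDs below come from this document; the per-million-token prices are made-up placeholders, not real rates:

```python
# Sketch of a pinned model/pricing table in the spirit of
# lib/claude.py:_PRICING. Prices are placeholders for illustration.
_PRICING = {
    # model id: (input $/Mtok, output $/Mtok) as placeholder numbers
    "claude-opus-4-7": (15.00, 75.00),
    "claude-sonnet-4-6": (3.00, 15.00),
    "claude-haiku-4-5-20251001": (0.80, 4.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Fail loudly on an unpinned model rather than guessing a price."""
    if model not in _PRICING:
        raise KeyError(f"model {model!r} is not pinned in _PRICING")
    in_rate, out_rate = _PRICING[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
```

Keying both the model ID and its price in one table is what forces the "update both" discipline: a new model ID that isn't added to _PRICING raises immediately instead of billing silently at a stale rate.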