AI Ad Studio is a premium, constrained ad-generation system for product marketing teams, ecommerce brands, app founders, and agencies.
Instead of trying to be a general video editor, this repository focuses on one narrow, high-quality workflow:
brief → concepts → previews → controlled render batches → review → canonical winner → promotion → delivery
That constraint is the product advantage. AI handles concepting, copy, and render planning, while the application enforces quality through templates, validations, approvals, review gates, and production-safe workflows.
The repo now supports three runtime modes for preview and scene-video generation:
- Runway only: use Runway for both previews and scene video
- Hybrid: use Runway for previews and a local inference sidecar for scene video
- Fully local: use the local inference sidecar for both previews and scene video
Runway is now an optional provider rather than a global requirement. If either PREVIEW_PROVIDER or SCENE_VIDEO_PROVIDER is set to runway, you still need an active paid Runway API subscription and a valid RUNWAYML_API_SECRET.
The current local-model matrix is:
- scene video baseline: cogvideox1.5-5b-i2v
- scene video high-end: wan2.1-i2v-14b-480p
- scene video fallback: svd-img2vid
- preview image default: flux-schnell
- preview image lighter fallback: sdxl-turbo
- structured product brief capture
- brand kits and reusable templates
- concept generation and storyboard preview flow
- controlled multi-variant render batches
- side-by-side batch review and winner selection
- external reviewer links with comments and approval state
- final decision locking with canonical export selection
- winner-only public promotion workflow
- public campaign pages
- finalized client delivery workspace
- owner-controlled single-export share links
AI Ad Studio is designed for short-form product advertising.
Current repository direction:
- product ad concepts only
- controlled variants instead of open-ended generation
- short exports and platform-aware render presets
- approval and review as first-class workflow steps
- public promotion only after final decision
- delivery workspace only from finalized canonical exports
The repository currently has three public token-based surfaces. They are not interchangeable.
Campaign pages are the primary public promotion surface.
Use them when:
- a reviewed export has been finalized
- the export is the current canonical winner for the project
- the goal is public-facing promotion or showcase-style sharing
Rules:
- winner-only
- canonical-only
- promotion-oriented
Delivery pages are the primary client handoff surface.
Use them when:
- a reviewed export has been finalized
- the export is the current canonical winner for the project
- the goal is structured delivery with handoff notes, approval summary, and downloadable assets
Rules:
- canonical-only
- handoff-oriented
- supports included exports from the finalized batch, but anchored to the canonical export
Share links are a lighter owner-controlled utility surface for a single export.
Use them when:
- you want to quickly share one export for preview or internal distribution outside the main winner-only flow
- you do not need campaign messaging
- you do not need delivery workspace structure or approval summary
Rules:
- single-export utility
- owner-created
- separate from winner-only campaign and canonical delivery workflows
The current repo state supports:
- brief capture, concept generation, and preview flow
- controlled render batch generation
- internal and external review collection
- winner selection and final decision locking
- current-canonical promotion gating
- public campaign pages for canonical winners
- public delivery workspaces for canonical winners
- token-scoped single-export share links
- worker polling, job claiming, and provider-backed generation flow
- token-scoped public media delivery with authenticated owner dashboard downloads
- apps/web — Next.js application for product workflow, review, publishing, and delivery
- apps/worker — async orchestration and job execution
- packages/shared — shared contracts and types
- packages/config — runtime configuration utilities
- packages/ui — reusable UI primitives
- packages/providers — provider contracts and adapters
- packages/media — media pipeline utilities
- Create a project and upload product assets
- Generate controlled concepts
- Generate previews
- Render controlled A/B variation batches
- Review outputs internally and externally
- Select a winner
- Finalize the canonical export
- Promote the finalized winner to showcase or campaign
- Prepare a client delivery workspace
The system follows a thin web layer plus durable database plus async worker model.
- the web app owns product UX, state transitions, approvals, and public pages
- the worker owns slow orchestration, provider calls, and render/composition tasks
- storage and metadata stay durable so long-running jobs can be resumed, audited, and reviewed
- render batches, external review, promotion, and delivery all build on explicit persisted records rather than transient client state
- Node.js 22 or newer
- pnpm 10
- a configured Supabase project
- R2 credentials for asset upload and public media delivery
- OpenAI credentials for text and speech generation flows
- Python 3.11 or newer if you want the local inference sidecar
- an active paid Runway API subscription only if you select runway for previews or scene video
Run pnpm install.
Create a local env file from the example with cp .env.example .env.local.
Fill in the values in .env.local.
These values are required for the authenticated web app and Supabase-backed session handling:
- NEXT_PUBLIC_APP_NAME
- NEXT_PUBLIC_APP_URL
- NEXT_PUBLIC_SUPABASE_URL
- NEXT_PUBLIC_SUPABASE_ANON_KEY
These additional server-side values are required for the full product workflow, including token-backed public pages, share links, uploads, downloads, and storage access:
- SUPABASE_SERVICE_ROLE_KEY
- R2_ACCOUNT_ID
- R2_ACCESS_KEY_ID
- R2_SECRET_ACCESS_KEY
- R2_BUCKET_NAME
The worker reads directly from process.env and requires these values to claim and execute jobs:
- NEXT_PUBLIC_SUPABASE_URL
- SUPABASE_SERVICE_ROLE_KEY
- WORKER_POLL_INTERVAL_MS
- R2_ACCOUNT_ID
- R2_ACCESS_KEY_ID
- R2_SECRET_ACCESS_KEY
- R2_BUCKET_NAME
- OPENAI_API_KEY
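Because the worker reads directly from process.env, a small pre-flight gate can catch missing values before launch. The helper below is a hypothetical sketch, not repo code; the variable names come from the list above.

```shell
# Hypothetical pre-flight helper: report any unset/empty variables from
# the worker-required list before starting the worker.
require_env() {
  missing=0
  for name in "$@"; do
    # Indirect lookup of the shell variable named in $name
    eval "val=\${$name:-}"
    if [ -z "$val" ]; then
      echo "missing: $name" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example gate before starting the worker:
# require_env NEXT_PUBLIC_SUPABASE_URL SUPABASE_SERVICE_ROLE_KEY \
#   WORKER_POLL_INTERVAL_MS R2_ACCOUNT_ID R2_ACCESS_KEY_ID \
#   R2_SECRET_ACCESS_KEY R2_BUCKET_NAME OPENAI_API_KEY && pnpm dev:worker
```

Running the gate before `pnpm dev:worker` fails fast instead of leaving the worker polling for configuration.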
These values are optional because the worker code provides defaults:
- OPENAI_CONCEPT_MODEL defaults to gpt-4o-mini
- OPENAI_TTS_MODEL defaults to gpt-4o-mini-tts
- OPENAI_TTS_VOICE defaults to alloy
- PREVIEW_PROVIDER defaults to runway
- SCENE_VIDEO_PROVIDER defaults to runway
- RUNWAY_IMAGE_MODEL defaults to gen4_image_turbo
- RUNWAY_VIDEO_MODEL defaults to gen4_turbo
- LOCAL_INFERENCE_BASE_URL defaults to http://127.0.0.1:8788
- LOCAL_IMAGE_MODEL defaults to flux-schnell
- LOCAL_VIDEO_MODEL defaults to cogvideox1.5-5b-i2v
- LOCAL_DEVICE defaults to cuda
- LOCAL_DTYPE defaults to bf16
- LOCAL_ENABLE_CPU_OFFLOAD defaults to false
- LOCAL_INFERENCE_TIMEOUT_MS defaults to 900000
The worker chooses preview and scene-video generation independently:
- PREVIEW_PROVIDER=runway|local_http|mock
- SCENE_VIDEO_PROVIDER=runway|local_http
Conditional requirements:
- RUNWAYML_API_SECRET is required only if either provider is runway
- LOCAL_INFERENCE_BASE_URL is required only if either provider is local_http
- LOCAL_IMAGE_MODEL matters only when PREVIEW_PROVIDER=local_http
- LOCAL_VIDEO_MODEL matters only when SCENE_VIDEO_PROVIDER=local_http
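As a sketch, the conditional rules above can be expressed as a small helper. The function is hypothetical (not repo code); it only mirrors the requirement logic described in this section.

```shell
# Hypothetical helper mirroring the conditional rules above: given the two
# provider selections, print the extra variables that must be set.
extra_vars_for() {
  preview="$1"; scene="$2"
  vars=""
  if [ "$preview" = "runway" ] || [ "$scene" = "runway" ]; then
    vars="$vars RUNWAYML_API_SECRET"
  fi
  if [ "$preview" = "local_http" ] || [ "$scene" = "local_http" ]; then
    vars="$vars LOCAL_INFERENCE_BASE_URL"
  fi
  echo "$vars"
}

# A hybrid setup needs both the Runway secret and the sidecar URL:
extra_vars_for runway local_http
```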
| Hardware tier | Preview recommendation | Scene video recommendation | Notes |
|---|---|---|---|
| 12–16GB GPU | sdxl-turbo if needed | cogvideox1.5-5b-i2v | Best starting point for mixed-tier machines |
| 24GB+ GPU | flux-schnell | wan2.1-i2v-14b-480p | Higher quality, heavier VRAM and runtime cost |
| CPU / macOS | mock or flux-schnell only if practical | not recommended | Treat local video as unsupported or experimental |
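One way to apply the table programmatically is to pick a scene-video model from reported VRAM. The thresholds below are an assumed reading of the table above, and the helper itself is hypothetical; `nvidia-smi` applies only to NVIDIA machines.

```shell
# Hypothetical helper: choose a LOCAL_VIDEO_MODEL from available VRAM in MB,
# following the hardware-tier table above (thresholds are assumptions).
pick_video_model() {
  vram_mb="$1"
  if [ "$vram_mb" -ge 24000 ]; then
    echo wan2.1-i2v-14b-480p
  elif [ "$vram_mb" -ge 12000 ]; then
    echo cogvideox1.5-5b-i2v
  else
    echo "no local video model recommended for ${vram_mb}MB" >&2
    return 1
  fi
}

# Usage on an NVIDIA machine:
# pick_video_model "$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits)"
```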
The local sidecar lives in services/local-inference and exposes:
- GET /health
- POST /v1/preview
- POST /v1/scene-video
- GET /v1/artifacts/{artifactId}
Setup:
cd services/local-inference
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
Start the sidecar from the repo root:
pnpm dev:local-inference
Or directly:
python3 -m uvicorn app.main:app --app-dir services/local-inference --host 127.0.0.1 --port 8788 --reload
Runway only:
PREVIEW_PROVIDER=runway
SCENE_VIDEO_PROVIDER=runway
RUNWAYML_API_SECRET=your-runway-key
RUNWAY_IMAGE_MODEL=gen4_image_turbo
RUNWAY_VIDEO_MODEL=gen4_turbo
Hybrid:
PREVIEW_PROVIDER=runway
SCENE_VIDEO_PROVIDER=local_http
RUNWAYML_API_SECRET=your-runway-key
LOCAL_INFERENCE_BASE_URL=http://127.0.0.1:8788
LOCAL_VIDEO_MODEL=cogvideox1.5-5b-i2v
LOCAL_DEVICE=cuda
LOCAL_DTYPE=bf16
Fully local:
PREVIEW_PROVIDER=local_http
SCENE_VIDEO_PROVIDER=local_http
LOCAL_INFERENCE_BASE_URL=http://127.0.0.1:8788
LOCAL_IMAGE_MODEL=flux-schnell
LOCAL_VIDEO_MODEL=cogvideox1.5-5b-i2v
LOCAL_DEVICE=cuda
LOCAL_DTYPE=bf16
LOCAL_ENABLE_CPU_OFFLOAD=false
Preview-only local / lightweight dev:
PREVIEW_PROVIDER=mock
SCENE_VIDEO_PROVIDER=local_http
LOCAL_INFERENCE_BASE_URL=http://127.0.0.1:8788
LOCAL_VIDEO_MODEL=cogvideox1.5-5b-i2v
Next.js will load .env.local automatically for the web app.
The worker does not use a dotenv loader in its current script. It reads from the shell environment. Before starting the worker, export the env values into the shell session that will run it.
One simple local workflow is:
set -a
source .env.local
set +a
After that, start the apps in separate terminals.
Run pnpm dev:web.
Run pnpm dev:worker.
Run pnpm dev:local-inference.
Alternatively, run everything with one command using pnpm dev. This works only after the worker-required env variables are already exported into the shell.
Run:
pnpm lint
pnpm test
pnpm build
pnpm typecheck
Or run the full Phase 31 verification wrapper:
pnpm verify:phase-31
Do not run pnpm typecheck and pnpm build in parallel for this repo. Both commands touch .next, and parallel execution can produce false-negative Next type generation errors.
For deployed runtime validation, run:
SMOKE_BASE_URL=https://your-app.example.com pnpm smoke:runtime
To run local verification plus optional deployed smoke checks in one pass:
SMOKE_BASE_URL=https://your-app.example.com pnpm verify:phase-31
Optional smoke inputs:
- SMOKE_SHARE_TOKEN
- SMOKE_CAMPAIGN_TOKEN
- SMOKE_DELIVERY_TOKEN
- SMOKE_DELIVERY_EXPORT_ID
- SMOKE_REVIEW_TOKEN
- SMOKE_CHECK_SHARE_DOWNLOAD=true
- SMOKE_CHECK_CAMPAIGN_DOWNLOAD=true
- SMOKE_ALLOW_DEGRADED_HEALTH=true
- SMOKE_REQUEST_TIMEOUT_MS=15000
/api/health exposes operator-safe readiness booleans for public app URL, Supabase auth, service-role availability, and R2 storage configuration. The smoke script uses that payload to fail early when requested token-surface checks depend on missing runtime configuration.
The smoke command checks /api/health plus any configured public token surfaces and download routes.
To inspect the deployment health payload directly:
curl -sS https://your-app.example.com/api/health
The expected healthy shape is:
{
"name": "AI Ad Studio",
"service": "web",
"status": "ok",
"readiness": {
"publicAppUrlConfigured": true,
"supabaseAuthConfigured": true,
"serviceRoleConfigured": true,
"r2Configured": true
}
}
If status is degraded, use the readiness flags to fix the missing runtime dependency before validating public routes:
- publicAppUrlConfigured: false means NEXT_PUBLIC_APP_URL is missing or empty
- supabaseAuthConfigured: false means NEXT_PUBLIC_SUPABASE_URL or NEXT_PUBLIC_SUPABASE_ANON_KEY is missing
- serviceRoleConfigured: false means SUPABASE_SERVICE_ROLE_KEY is missing
- r2Configured: false means one or more of R2_ACCOUNT_ID, R2_ACCESS_KEY_ID, R2_SECRET_ACCESS_KEY, or R2_BUCKET_NAME is missing
Do not treat share, campaign, or delivery downloads as production-ready while r2Configured is false.
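A quick way to gate further checks on the payload above is a small readiness filter. The helper below is a hypothetical sketch: the flag names match the payload shown in this section, but the function itself is not part of the repo and its text matching is a simplification of real JSON parsing.

```shell
# Hypothetical helper: read the /api/health JSON on stdin and fail on the
# first readiness flag that is not true. Flag names come from the payload
# documented above; matching is textual, not a full JSON parse.
check_ready() {
  payload=$(cat)
  for flag in publicAppUrlConfigured supabaseAuthConfigured serviceRoleConfigured r2Configured; do
    if ! printf '%s' "$payload" | grep -Eq "\"$flag\":[[:space:]]*true"; then
      echo "not ready: $flag" >&2
      return 1
    fi
  done
  echo ready
}

# Usage against a real deployment:
# curl -sS https://your-app.example.com/api/health | check_ready
```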
Web:
pnpm --filter @ai-ad-studio/web build
pnpm --filter @ai-ad-studio/web start
Worker:
pnpm --filter @ai-ad-studio/worker build
pnpm --filter @ai-ad-studio/worker start
- if the public Supabase keys are missing, authenticated web flows and login-dependent pages will not work
- if the R2 variables are missing, upload and download routes will return storage configuration errors
- if the worker-required variables are missing, the worker stays alive and keeps polling for configuration instead of processing jobs
- if a selected provider is runway and RUNWAYML_API_SECRET is missing, the worker will refuse to start that configuration
- if a selected provider is local_http and the sidecar is unreachable, preview or scene-video jobs will fail with a local inference connectivity error
- if the local model is too large for the available VRAM, the sidecar will fail during model load or inference; switch to a lighter model or enable CPU offload
- public campaign, share, and delivery media routes rely on token-scoped access plus server-side R2 reads
- owner dashboard export downloads remain authenticated
- /api/health now exposes operator-safe readiness booleans for auth, service-role access, R2, and public app URL configuration
- local video support only changes scene generation; FFmpeg composition remains the final export compositor
Current known limitations and truths:
- the worker still expects its required environment variables to be present in the shell environment that launches it
- token-backed public routes are runtime-safe by design, but they should still be smoke-validated in each deployment environment
- repo smoke coverage is focused on critical business rules and state derivation, not full browser end-to-end automation
- migration application and infrastructure provisioning are assumed to happen outside this repo
- delivery analytics and client acknowledgement flows are not part of Phase 31 and are better handled in Phase 32
- the web runtime needs the public Supabase keys in every environment
- the server-side web runtime also needs SUPABASE_SERVICE_ROLE_KEY and the R2 credentials for share links, public token routes, and asset delivery
- the worker runtime needs Supabase service-role access, R2 credentials, OpenAI credentials, plus whichever media-provider credentials/endpoints match the selected env providers
- promotion, review, delivery, and public token pages assume the database schema and migrations are already applied before the services are started
- if pnpm smoke:runtime fails because /api/health is degraded, fix the missing env vars first instead of debugging public routes
- if r2Configured is false, expect token-backed downloads and storage-backed media delivery to fail even when the pages themselves render
- if you only need a temporary diagnostic pass while infrastructure is being wired, rerun with SMOKE_ALLOW_DEGRADED_HEALTH=true, but do not treat that as release-ready
Before treating the repo as release-candidate ready, verify all of the following:
- pnpm lint
- pnpm test
- pnpm build
- pnpm typecheck
And, when validating a real deployed environment:
SMOKE_BASE_URL=https://your-app.example.com pnpm smoke:runtime
And manually verify:
- an active /review/[token] page is writable
- a finalized or inactive /review/[token] page is frozen
- /campaign/[token] plays media without login
- /delivery/[token] downloads included assets without login
- /share/[token] still works as a single-export share surface
- /api/exports/[exportId]/download remains protected when logged out
This repository prefers:
- production-grade changes over quick hacks
- small focused modules instead of oversized files
- strong typing and explicit validation
- clean architectural boundaries
- durable workflow records for anything reviewable or long-running
- descriptive commit messages and cohesive pull requests
Read these files before contributing:
- CONTRIBUTING.md
- CODE_OF_CONDUCT.md
- SECURITY.md
- SUPPORT.md
Please do not report vulnerabilities in public issues. See SECURITY.md.
This repository is licensed under the MIT License. See LICENSE.