anvil is a Cloudflare-native CI runner for personal projects and small teams.
The v1 architecture is intentionally split across Cloudflare products by access pattern:
- D1 stores relational control-plane data shared across users and projects.
- KV stores short-lived session state with TTL-based expiry.
- Durable Objects with SQLite store hot coordination state and live run state.
- Queues decouple trigger ingestion from runner execution.
- Sandbox runs builds in isolated Linux environments.
- React provides the operator UI, served from the same Worker application.
anvil is designed around three hard requirements from the start:
- A single public API prefix that can be protected by one WAF rate limit rule.
- Multi-user ownership with single-owner projects in v1.
- Repository-defined pipeline config instead of UI-defined commands.
In scope for v1:
- Multiple users.
- Multiple projects per user.
- Custom HTTPS Git repositories.
- Manual run trigger.
- Webhook-triggered runs.
- Repository-defined config from `.anvil.yml`.
- Invite-only access for v1.
- One active run per project.
- Per-project FIFO pending run queue.
- User-initiated cancellation of active or pending runs.
- Live log streaming.
- Strong coordination around run creation and run state.
Out of scope for v1:
- Deployments.
- Preview environments.
- SSH Git auth.
- Matrix builds.
- DAG or multi-stage orchestration.
- Warm reusable runners.
- User-specified runner images.
- Artifact browser.
- R2 log archiving.
- Human approval gates.
- Shared multi-user projects and project collaboration, beyond leaving room for future expansion.
Core stack:
- TypeScript
- Cloudflare Workers
- Hono
- React
- Vite
- `@cloudflare/vite-plugin`
Use Hono as the Worker HTTP framework and routing layer.
@cloudflare/util-en-garde
All external and internal boundary payloads must be described with util-en-garde codecs and inferred TypeScript types.
If usage patterns are unclear, refer to en-garde.README.md.
- D1 for relational data across users/projects/runs.
- Workers KV for short-lived session state.
- SQLite-backed Durable Objects for project-local and run-local state.
- Drizzle ORM for D1 and Durable Object SQLite access.
All application-level database reads and writes must use drizzle-orm by default.
Use Drizzle's documented APIs for transactional and batched database work where appropriate; see Drizzle transactions and Drizzle batch API.
Raw SQL may be used only when drizzle-orm cannot express the required operation cleanly or when it is absolutely necessary for correctness or performance, and any such usage must be narrowly scoped.
All durable entity IDs use the format:
{prefix}_{base62(uuidv7)}
Examples:
- usr_000Ff2k9A6pQzL1cM8xYwR
- prj_000Ff2m4sC7vTb9Jk2nHdP
- run_000Ff2qQw8LmNc3Xy6rStU
Rules:
- the base62 suffix is fixed-width at 22 characters
- the canonical base62 alphabet is `0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz`
- public IDs are opaque and stable
- this format applies to durable entity IDs such as `usr_`, `prj_`, `run_`, `inv_`, and `whk_`
- high-entropy security tokens such as session IDs, invite tokens, WebSocket tickets, and webhook secrets do not use this format
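The encoding rules above can be sketched in TypeScript. The function names and the UUID input are illustrative; only the alphabet, the fixed 22-character width, and the prefix set come from the rules above:

```typescript
// Sketch of the {prefix}_{base62(uuidv7)} ID scheme. The UUID source is
// taken as a 36-char hex string; generating UUIDv7 itself is not shown.

const ALPHABET =
  "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

// Encode a 128-bit value as fixed-width 22-char base62 (62^22 > 2^128,
// so every UUID fits without overflow).
function base62Fixed22(value: bigint): string {
  let out = "";
  let v = value;
  for (let i = 0; i < 22; i++) {
    out = ALPHABET[Number(v % 62n)] + out;
    v /= 62n;
  }
  return out;
}

// Build a prefixed durable entity ID from a UUID's hex digits.
function entityId(
  prefix: "usr" | "prj" | "run" | "inv" | "whk",
  uuidHex: string,
): string {
  const bits = BigInt("0x" + uuidHex.replace(/-/g, ""));
  return `${prefix}_${base62Fixed22(bits)}`;
}
```

Fixed-width padding matters: without it, small UUID values would produce shorter suffixes and IDs would no longer be uniform-length opaque strings.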
- Worker frontdoor
  - API routing
  - auth/session checks
  - D1 access
  - Durable Object RPC invocation
  - authenticated WebSocket upgrade routing
  - Queue producer
  - frontend asset serving
- ProjectDO
  - one object per project
  - project-level concurrency and trigger arbitration
  - accepted run state and D1 sync/dispatch reconciliation
  - webhook configuration and encrypted secret storage
  - active and pending run lock state
- RunDO
  - one object per run
  - live run metadata
  - rolling log storage
  - WebSocket fanout for log viewers
  - run completion and tail retention
- Queue consumer
  - sandbox creation
  - git checkout
  - repo config parsing
  - sequential command execution
  - log streaming into RunDO
- Sandbox
  - isolated build execution per run
The control plane is deliberately divided:
- Global relational control plane in D1:
  - users
  - projects
  - invites
  - run index rows
- Project-local coordination plane in ProjectDO:
  - active run lock
  - pending run queue coordination
  - accepted run metadata and D1 sync/dispatch retry state
  - webhook definitions and encrypted secret material
- Run-local live plane in RunDO:
  - hot status
  - step state
  - rolling log tail
  - WebSocket attachments and tags
This is the core architectural boundary of anvil.
Public identifiers are not Durable Object IDs.
- ProjectDO is addressed internally via `idFromName(projectId)`
- RunDO is addressed internally via `idFromName(runId)`
- Durable Object IDs remain internal implementation details and are never exposed as API identifiers
All non-WebSocket interactions with Durable Objects must use Workers RPC.
- the Worker frontdoor and queue consumer call typed RPC methods on ProjectDO and RunDO stubs
- Durable Objects are internal actors, not general-purpose HTTP handlers for private API routes
- the Worker owns HTTP parsing, request validation, authentication, authorization, and response shaping before invoking RPC
- Durable Object RPC methods receive trusted typed inputs and enforce project-local or run-local invariants
- the log-stream WebSocket upgrade is the only `fetch`-based Durable Object path in v1, and the Worker authenticates the upgrade before handing it to RunDO
All public, unauthenticated, or brute-forceable endpoints must live under one shared prefix:
/api/public/*
All authenticated application endpoints must live under:
/api/private/*
A single WAF rate limit rule should protect:
starts_with(http.request.uri.path, "/api/public/")
This one rule is the primary public attack-surface control for:
- login brute force
- session abuse
- webhook spray
- password reset abuse, if added later
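The same prefix discipline can be asserted at the application layer as a cheap invariant check. The WAF rule itself lives in Cloudflare configuration; `classifyPath` is an illustrative helper, not a prescribed API:

```typescript
// Mirror of the starts_with(http.request.uri.path, "/api/public/")
// discipline. Any new brute-forceable route must land in "public" to
// inherit the single WAF rate limit rule.

type ApiSurface = "public" | "private" | "unknown";

function classifyPath(pathname: string): ApiSurface {
  if (pathname.startsWith("/api/public/")) return "public";
  if (pathname.startsWith("/api/private/")) return "private";
  return "unknown";
}
```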
- POST /api/public/auth/login
- POST /api/public/auth/logout
- POST /api/public/auth/invite/accept (only route that can create a user in v1)
- POST /api/public/hooks/:provider/:ownerSlug/:projectSlug
Registration is invite-only in v1. There is no open self-signup route.
- GET /api/private/me
- GET /api/private/projects
- POST /api/private/projects
- PATCH /api/private/projects/:projectId
- GET /api/private/projects/:projectId
- GET /api/private/projects/:projectId/runs
- POST /api/private/projects/:projectId/runs
- GET /api/private/projects/:projectId/webhooks
- PUT /api/private/projects/:projectId/webhooks/:provider
- POST /api/private/projects/:projectId/webhooks/:provider/rotate-secret
- DELETE /api/private/projects/:projectId/webhooks/:provider
- POST /api/private/runs/:runId/cancel
- GET /api/private/runs/:runId
- POST /api/private/runs/:runId/log-ticket
- GET /api/private/runs/:runId/logs (WebSocket upgrade)
- POST /api/private/invites
Session records are stored in KV, not D1.
The frontend stores the opaque session identifier in browser localStorage, not cookies.
Each session key is:
- a random opaque identifier
- written with `expirationTtl`
- returned by login and stored in browser `localStorage`
- sent by the frontend on private requests, typically using an `Authorization: Bearer <sessionId>` header
- deleted on logout or allowed to expire naturally
Recommended KV value:
{
  "userId": "usr_...",
  "issuedAt": "2026-03-16T00:00:00.000Z",
  "expiresAt": "2026-03-16T06:00:00.000Z",
  "version": 1
}

Suggested key pattern:
sess:{sessionId}
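A minimal sketch of the record and key pattern above; the helper names are illustrative and the KV binding itself is not modeled:

```typescript
// Session record shape and key pattern from the recommendation above.
interface SessionRecord {
  userId: string;
  issuedAt: string;  // ISO timestamp
  expiresAt: string; // ISO timestamp
  version: 1;
}

const SESSION_TTL_SECONDS = 6 * 60 * 60; // 6-hour default TTL

function buildSessionKey(sessionId: string): string {
  return `sess:${sessionId}`;
}

function newSessionRecord(userId: string, now: Date): SessionRecord {
  return {
    userId,
    issuedAt: now.toISOString(),
    expiresAt: new Date(now.getTime() + SESSION_TTL_SECONDS * 1000).toISOString(),
    version: 1,
  };
}
```

In a Worker this would be written with something like `env.SESSIONS.put(key, JSON.stringify(record), { expirationTtl: SESSION_TTL_SECONDS })`, letting KV handle expiry.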
Session TTL should be short and renewable.
Recommended v1 policy:
- default TTL: 6 hours
- refresh-on-use: refresh when less than 1 hour remains
- delete on logout
KV is eventually consistent across regions. This is acceptable for short-lived opaque sessions, but the design must tolerate:
- logout invalidation not becoming globally visible instantly
- recently-created sessions taking some time to appear in far regions
Mitigations:
- use random session IDs with high entropy
- keep session payload minimal
- do not use KV for authorization data beyond the user ID and expiry
- fetch authorization and project ownership from D1 on private requests
- treat logout as best-effort immediate and globally convergent shortly after
Because the frontend uses localStorage rather than cookies:
- the application avoids ambient cookie attachment and the CSRF exposure tied to cookie-based session transport
- the application must treat XSS resistance as critical because `localStorage` is accessible to frontend JavaScript
- the frontend must never place the session identifier in URLs, WebSocket query strings, or any other browser-visible location beyond the dedicated auth storage key
- logout must clear in-memory auth state and remove the `localStorage` entry immediately
- the frontend must enforce a strict Content Security Policy and avoid inline script execution
- run logs and all other untrusted runner output must be rendered as text, not raw HTML
- any rich log formatting such as ANSI colorization must start from escaped text and apply only an allowlisted presentation transform
For v1:
- login must reject users whose `disabled_at` is set
- private requests must reject sessions whose user row is disabled in D1, even if the KV session has not yet expired
- disabled users cannot create new projects, runs, webhooks, or invites
Password credential rows remain in D1.
Recommended v1 password storage format:
- algorithm: PBKDF2
- per-user random salt stored alongside the password hash
- iteration count stored alongside the password hash so parameters can be raised later
- derived key length and digest algorithm recorded as metadata if the implementation wants explicit forward compatibility
Suggested columns:
`user_id`, `algorithm`, `digest`, `iterations`, `salt`, `password_hash`, `updated_at`
The salt is required so identical passwords do not map to identical stored hashes and to make precomputed rainbow tables ineffective.
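A sketch of this storage format. It uses Node's `pbkdf2Sync` for illustration; in a Worker the same derivation is available through WebCrypto (`crypto.subtle.importKey` + `deriveBits`). The iteration count shown is an example default, not a mandated value:

```typescript
import { pbkdf2Sync, randomBytes, timingSafeEqual } from "node:crypto";

// Mirrors the suggested credential columns: algorithm, digest,
// iterations, salt, password_hash.
interface PasswordRow {
  algorithm: "PBKDF2";
  digest: "SHA-256";
  iterations: number;
  salt: Buffer;
  password_hash: Buffer;
}

function hashPassword(password: string, iterations = 600_000): PasswordRow {
  const salt = randomBytes(16); // per-user random salt
  const password_hash = pbkdf2Sync(password, salt, iterations, 32, "sha256");
  return { algorithm: "PBKDF2", digest: "SHA-256", iterations, salt, password_hash };
}

// Verify using the parameters stored on the row, so iterations can be
// raised for new hashes without invalidating existing rows.
function verifyPassword(password: string, row: PasswordRow): boolean {
  const candidate = pbkdf2Sync(password, row.salt, row.iterations, 32, "sha256");
  return timingSafeEqual(candidate, row.password_hash);
}
```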
Not in v1, but the architecture should leave room for:
- OAuth login
- SAML login
Recommended future shape:
- keep local password auth as one provider
- add an `identity_providers` table in D1 later
- add `user_identities` rows mapping users to external providers and stable provider subject IDs
- keep `/api/public/auth/*` as the public auth ingress prefix so WAF protection remains unchanged
D1 is the global relational source of truth for:
- users
- password credentials
- projects
- project ownership
- run index
- canonical prefixed entity identifiers and owner-scoped slugs
- encrypted user-provided project credentials
- invite tokens
The following should not be stored centrally in D1:
- live run logs
- active-run lock state
- webhook configuration
- encrypted webhook secret material
- live WebSocket connection state
- per-project accepted-run and pending-queue coordination state
Webhook configuration lives in ProjectDO.
anvil should use the D1 Sessions API whenever possible, especially on read-heavy application routes.
Create two D1 helpers:
- `openReadSession(request, env)`
- `openPrimarySession(request, env)`
Use when the route is logically read-only.
Behavior:
- read bookmark from request header if present
- call `env.DB.withSession(bookmark ?? "first-unconstrained")`
- execute all D1 reads through this session
- return the updated bookmark back to the client
Use when the route may write or must start from the latest primary state.
Behavior:
- call `env.DB.withSession("first-primary")`
- execute D1 read/write operations through this session
- return the updated bookmark back to the client
Use a lightweight browser-visible storage for the D1 bookmark.
Recommended initial approach:
- response header: `x-anvil-d1-bookmark`
- mirrored into browser `localStorage` by the frontend fetch wrapper
The bookmark is not auth material. It is only a consistency token.
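The bookmark negotiation can be sketched as plain helpers. The actual D1 session object comes from `env.DB.withSession(...)` and its `getBookmark()` method; only the header plumbing is modeled here:

```typescript
// Bookmark plumbing for the two D1 session helpers described above.
const BOOKMARK_HEADER = "x-anvil-d1-bookmark";

// Read-only routes: resume from the client's bookmark when present,
// otherwise accept any replica.
function readSessionConstraint(headers: Headers): string {
  return headers.get(BOOKMARK_HEADER) ?? "first-unconstrained";
}

// Writing routes: always start from the latest primary state.
function primarySessionConstraint(): string {
  return "first-primary";
}

// After the handler runs, echo the updated bookmark (from
// session.getBookmark()) so the client can carry it forward.
function withBookmark(response: Response, bookmark: string | null): Response {
  if (bookmark !== null) response.headers.set(BOOKMARK_HEADER, bookmark);
  return response;
}
```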
These routes should use openReadSession:
- GET /api/private/me
- GET /api/private/projects
- GET /api/private/projects/:projectId
- GET /api/private/projects/:projectId/runs
- GET /api/private/projects/:projectId/webhooks
- GET /api/private/runs/:runId
- POST /api/private/runs/:runId/log-ticket for ownership verification and ticket minting before WebSocket upgrade
Potentially read-only public routes, if later added:
- GET /api/public/auth/session
- GET /api/public/projects/:ownerSlug/:projectSlug/info if ever exposed
These routes should use openPrimarySession:
- POST /api/public/auth/login
- POST /api/public/auth/invite/accept
- POST /api/private/invites
- POST /api/private/projects
- PATCH /api/private/projects/:projectId
- POST /api/private/projects/:projectId/runs
- PUT /api/private/projects/:projectId/webhooks/:provider
- POST /api/private/projects/:projectId/webhooks/:provider/rotate-secret
- DELETE /api/private/projects/:projectId/webhooks/:provider
- POST /api/private/runs/:runId/cancel
- POST /api/public/hooks/:provider/:ownerSlug/:projectSlug
Any route that only:
- validates session via KV
- checks ownership in D1
- returns data without mutating D1
should use the D1 Sessions API read path.
One ProjectDO exists per project.
- serialize run trigger requests
- allocate `runId` values for accepted runs
- enforce one-active-run-per-project
- own the per-project FIFO pending run queue in v1
- persist accepted run metadata before D1 sync and queue dispatch succeed
- snapshot the non-secret execution inputs required to execute an accepted run
- act as the single durable reconciler for queue dispatch and D1 run-summary sync
- store webhook definitions and encrypted secrets
- deduplicate webhook deliveries
- return webhook verification material to the Worker and accept verified control-plane actions via RPC
- coordinate run start, cancellation, and lock release
state:
project_id TEXT PRIMARY KEY
active_run_id TEXT NULL
updated_at INTEGER NOT NULL
project_runs:
id TEXT PRIMARY KEY
project_id TEXT NOT NULL
run_id TEXT NOT NULL
trigger_type TEXT NOT NULL
triggered_by_user_id TEXT NULL
branch TEXT NOT NULL
commit_sha TEXT NULL
provider TEXT NULL
delivery_id TEXT NULL
repo_url TEXT NOT NULL
config_path TEXT NOT NULL
position INTEGER NULL
status TEXT NOT NULL
d1_sync_status TEXT NOT NULL
dispatch_status TEXT NOT NULL
dispatch_attempts INTEGER NOT NULL
last_error TEXT NULL
created_at INTEGER NOT NULL
cancel_requested_at INTEGER NULL
project_runs is ProjectDO's durable reconciliation ledger for accepted runs.
- `status` tracks ProjectDO's accepted-run and queue-local state
- `d1_sync_status` tracks whether the D1 run summary is reconciled for both initial acceptance and terminal completion
- `dispatch_status` tracks whether the currently executable run has been queued for execution
At acceptance time, ProjectDO snapshots the non-secret execution inputs required for execution.
- the effective `branch`
- `repo_url`
- `config_path`
Repository credentials are not snapshotted. The queue consumer resolves the latest stored repository token from D1 at execution time.
status values in v1:
`pending`, `executable`, `active`, `cancel_requested`, `passed`, `failed`, `canceled`
Allowed status transitions:
- `pending -> executable`
- `pending -> canceled`
- `executable -> active`
- `executable -> failed`
- `executable -> canceled`
- `active -> cancel_requested`
- `active -> passed`
- `active -> failed`
- `cancel_requested -> canceled`
- `cancel_requested -> failed`
d1_sync_status values in v1:
`needs_create`, `current`, `needs_terminal_update`, `done`
Allowed d1_sync_status transitions:
- `needs_create -> current`
- `needs_create -> needs_terminal_update`
- `current -> needs_terminal_update`
- `needs_terminal_update -> done`
dispatch_status values in v1:
`blocked`, `pending`, `queued`, `started`, `terminal`
Allowed dispatch_status transitions:
- `blocked -> pending`
- `pending -> queued`
- `queued -> started`
- `blocked -> terminal`
- `pending -> terminal`
- `queued -> terminal`
- `started -> terminal`
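These transition tables can be encoded as data with a single guard, so an illegal transition fails a check instead of happening silently. A sketch for two of the tables (the `d1_sync_status` table follows the same pattern):

```typescript
// Transition tables from the lists above, keyed by current state.
const STATUS_TRANSITIONS: Record<string, string[]> = {
  pending: ["executable", "canceled"],
  executable: ["active", "failed", "canceled"],
  active: ["cancel_requested", "passed", "failed"],
  cancel_requested: ["canceled", "failed"],
  passed: [],
  failed: [],
  canceled: [],
};

const DISPATCH_TRANSITIONS: Record<string, string[]> = {
  blocked: ["pending", "terminal"],
  pending: ["queued", "terminal"],
  queued: ["started", "terminal"],
  started: ["terminal"],
  terminal: [],
};

// Generic guard: a transition is legal only if the table lists it.
function canTransition(table: Record<string, string[]>, from: string, to: string): boolean {
  return table[from]?.includes(to) ?? false;
}
```

ProjectDO would consult the guard before any state write and treat a `false` result as a bug or a stale request, never as something to apply.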
project_webhooks:
id TEXT PRIMARY KEY
project_id TEXT NOT NULL
provider TEXT NOT NULL
secret_ciphertext BLOB NOT NULL
secret_key_version INTEGER NOT NULL
secret_nonce BLOB NOT NULL
enabled INTEGER NOT NULL
created_at INTEGER NOT NULL
updated_at INTEGER NOT NULL
project_webhook_deliveries:
id TEXT PRIMARY KEY
project_id TEXT NOT NULL
provider TEXT NOT NULL
delivery_id TEXT NOT NULL
run_id TEXT NULL
received_at INTEGER NOT NULL
CREATE UNIQUE INDEX idx_project_webhooks_project_provider ON project_webhooks(project_id, provider);
CREATE INDEX idx_project_webhooks_provider_enabled ON project_webhooks(provider, enabled);
CREATE INDEX idx_project_webhooks_project_enabled ON project_webhooks(project_id, enabled);
CREATE UNIQUE INDEX idx_project_webhook_deliveries_project_provider_delivery ON project_webhook_deliveries(project_id, provider, delivery_id);
CREATE INDEX idx_project_webhook_deliveries_project_received_at ON project_webhook_deliveries(project_id, received_at);
CREATE UNIQUE INDEX idx_project_runs_project_position ON project_runs(project_id, position);
CREATE INDEX idx_project_runs_project_status_position ON project_runs(project_id, status, position);
CREATE UNIQUE INDEX idx_project_runs_run_id ON project_runs(run_id);
The state table is primary-key driven and does not need extra indexes in v1.
One RunDO exists per run.
- receive live log events from runner
- persist a rolling log tail
- own all log stream WebSockets
- broadcast to viewers
- keep authoritative hot run state during execution
- finalize run completion metadata
- return minimal trusted run metadata to the Worker when a newly accepted `runId` is not yet visible in D1
- expose run-state and log mutation operations via RPC
RunDO is authoritative for active run state and recent run detail. D1 run_index is the durable query/index layer and may lag while a run is active.
Its fetch handler is reserved for the Worker-authenticated WebSocket upgrade path.
run_meta:
id TEXT PRIMARY KEY
project_id TEXT NOT NULL
status TEXT NOT NULL
trigger_type TEXT NOT NULL
branch TEXT NOT NULL
commit_sha TEXT NULL
current_step INTEGER NULL
started_at INTEGER NULL
finished_at INTEGER NULL
exit_code INTEGER NULL
error_message TEXT NULL
run_steps:
id TEXT PRIMARY KEY
run_id TEXT NOT NULL
position INTEGER NOT NULL
name TEXT NOT NULL
command TEXT NOT NULL
status TEXT NOT NULL
started_at INTEGER NULL
finished_at INTEGER NULL
exit_code INTEGER NULL
run_logs:
id TEXT PRIMARY KEY
run_id TEXT NOT NULL
seq INTEGER NOT NULL
stream TEXT NOT NULL
chunk TEXT NOT NULL
created_at INTEGER NOT NULL
CREATE UNIQUE INDEX idx_run_logs_run_seq ON run_logs(run_id, seq);
CREATE INDEX idx_run_logs_run_created_at ON run_logs(run_id, created_at);
CREATE UNIQUE INDEX idx_run_steps_run_position ON run_steps(run_id, position);
CREATE INDEX idx_run_meta_project_started_at ON run_meta(project_id, started_at);
The most common queries in RunDO are:
- fetch latest log tail for one run
- append ordered log chunks
- fetch ordered steps for one run
These indexes are designed specifically for those patterns.
This is a first-class design decision, not an implementation detail.
Run log streaming must use the Durable Object WebSocket Hibernation API.
CI logs are bursty:
- large bursts while commands are active
- idle gaps during install, network wait, or subprocess silence
- viewers can remain attached for long periods
Hibernation is the right fit because:
- clients stay connected while the object is evicted from memory
- the object wakes automatically on the next event
- duration charges do not accrue while the object is sleeping
- anvil does not need to pin a RunDO in memory just because a browser tab is open
RunDO must use:
- `ctx.acceptWebSocket(ws)`
- `ctx.getWebSockets()`
- `ws.serializeAttachment(...)`
- `ws.deserializeAttachment()`
- `ctx.setWebSocketAutoResponse(...)`
Each WebSocket attachment should store:
- `runId`
- `userId`
- `connectedAt`
- `lastAckedSeq` if incremental replay is later added
When RunDO wakes after hibernation:
- constructor runs again
- in-memory state is rebuilt from SQLite and socket attachments
- attached sockets are recovered via `ctx.getWebSockets()`
- replay state must not depend on old memory
For v1:
- keep a bounded rolling log tail in RunDO SQLite
- cap retained hot log storage at 2 MiB per run
- on new WebSocket connection, replay the recent tail
- then stream live events
Full log archival is deferred to future R2 integration.
Use auto-response for ping/pong-style keepalive traffic so idle viewers do not wake the object unnecessarily.
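Only the replay-selection logic is sketched here; the WebSocket and SQLite plumbing are the Workers APIs listed above. `lastAckedSeq` models the optional incremental-replay attachment field; a viewer without it receives the full retained tail:

```typescript
// LogRow mirrors the run_logs shape; Attachment mirrors the fields the
// design stores per WebSocket via serializeAttachment.
interface LogRow {
  seq: number;
  stream: string;
  chunk: string;
}

interface Attachment {
  runId: string;
  userId: string;
  connectedAt: number;
  lastAckedSeq?: number;
}

// On wake (or for a fresh connection), replay only rows the viewer has
// not acknowledged. A brand-new viewer has no lastAckedSeq and gets the
// whole retained tail, then live events resume from there.
function rowsToReplay(tail: LogRow[], att: Attachment): LogRow[] {
  const from = att.lastAckedSeq ?? -1;
  return tail.filter((row) => row.seq > from);
}
```

Because the tail lives in RunDO SQLite and the attachment survives hibernation, this computation needs no in-memory state from before the eviction.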
Browser WebSocket clients cannot attach an Authorization header during the upgrade flow. For v1, anvil uses a short-lived log-stream ticket stored in KV.
- authenticated client calls `POST /api/private/runs/:runId/log-ticket`
- Worker validates session identity and run ownership before minting the ticket
- the ticket is stored in KV with `runId`, `userId`, and expiry metadata
- the ticket is best-effort single-use and should expire after 60 seconds
- the browser connects using `GET /api/private/runs/:runId/logs?ticket=...`
- the Worker validates and consumes the ticket before forwarding the upgrade to RunDO
- strict global single-use is not required in v1 because KV is eventually consistent; the security boundary is short TTL plus binding the ticket to `runId` and `userId`
- the Worker forwards trusted authenticated upgrade metadata to RunDO; RunDO must not treat the browser query string as auth material
- session identifiers must never appear in WebSocket query strings
Queues provide durable handoff between trigger ingestion and execution.
Each queue message contains:
{
  "projectId": "prj_...",
  "runId": "run_..."
}

A queue message is a delivery hint, not the source of truth for scheduling.
Cloudflare Queues do not provide strict FIFO delivery guarantees, so v1 must not rely on queue delivery order to preserve per-project execution order.
ProjectDO is authoritative for:
- whether a run is still pending
- whether a run is currently active
- whether a run has been canceled
- which pending run is next in FIFO order
The queue consumer must re-check ProjectDO before starting work and must no-op stale, duplicate, canceled, or superseded queue messages.
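That re-check reduces to a small decision function; `RunCheck` is an illustrative shape for what a ProjectDO RPC might report, not a defined API:

```typescript
// Consumer-side decision for a received queue message. The message is
// only a delivery hint, so ProjectDO's answer is authoritative.
interface RunCheck {
  exists: boolean;             // ProjectDO knows this run
  status: string;              // ProjectDO-side status, e.g. "executable"
  isCurrentExecutable: boolean; // this run is the one to execute now
}

type ConsumeDecision = "execute" | "no_op";

function decideOnMessage(check: RunCheck): ConsumeDecision {
  if (!check.exists) return "no_op";               // stale or unknown run
  if (check.status === "canceled") return "no_op"; // canceled before start
  if (!check.isCurrentExecutable) return "no_op";  // duplicate or superseded
  return "execute";
}
```

A `no_op` still acknowledges the message; re-delivering it would not change ProjectDO's answer.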
A run is considered accepted once ProjectDO durably writes the accepted run record to its local SQLite state.
- ProjectDO allocates the canonical `runId`
- the accepted run record snapshots the non-secret execution inputs required for execution
- the accepted run record is written before D1 sync and queue enqueue are required to succeed
- the API returns `202 Accepted` with `runId` after the ProjectDO commit succeeds
- D1 `run_index` creation is a post-acceptance reconciliation step
- queue enqueue is a post-acceptance reconciliation step only when the accepted run is currently executable
For v1:
- maximum pending accepted runs per project: 20
- ProjectDO is the single durable reconciler for queue dispatch and D1 run-summary sync
- ProjectDO is also the durable watchdog owner for an active accepted run until terminalization is confirmed
- only the currently executable run should have a queue message enqueued
- accepted runs behind an active run remain only in the ProjectDO FIFO queue until promoted
- when ProjectDO promotes the next pending run to executable, exactly one queue message should be enqueued for that run
- queue enqueue failures before execution begins should be retried from ProjectDO with bounded exponential backoff
- D1 sync failures for both initial acceptance and terminal completion should be retried from ProjectDO using an alarm or equivalent retry mechanism
- if dispatch retries are exhausted before sandbox execution begins, the run is marked `failed` with a system reason such as `dispatch_failed`
- while a run is active, the queue consumer must periodically heartbeat execution progress to ProjectDO
- if the heartbeat becomes stale before a terminal update is recorded, ProjectDO marks the run `failed` with a system reason such as `runner_lost`, reconciles D1, releases the active lock, and advances the queue
- once a sandbox has started, anvil does not automatically rerun the build on worker-side failure; it only finalizes the accepted run
For v1:
- queue consumer invocations have a 15 minute wall-clock limit on Cloudflare
- the queue consumer Worker should run on a paid plan with `limits.cpu_ms` set to 300000
- whole-run timeout must stay below the queue consumer wall-clock limit so checkout, reconciliation, and cleanup have headroom
- the queue consumer should use Sandbox SDK WebSocket transport to avoid per-operation subrequest pressure
- active CI sandboxes should use `keepAlive: true` and must always be explicitly destroyed
- load project summary from D1, including the latest encrypted repository credential metadata if present
- call ProjectDO RPC to confirm run ownership and queue state and retrieve the accepted-run execution snapshot
- no-op the message if ProjectDO reports the run is stale, duplicate, canceled, already completed, or not the current executable run
- treat a message for a non-executable run as an unexpected but tolerated stale delivery and emit a structured log or metric before acknowledging it
- create Sandbox with `keepAlive: true`
- use the Sandbox SDK to check out the repository inside the Sandbox
- load the repository config from the snapshotted `config_path`
- validate config with `util-en-garde`
- transition the run through `starting` and `running` in RunDO via RPC
- create step rows in RunDO via RPC
- start heartbeat updates to ProjectDO while the run is active
- run named steps sequentially
- stream output to RunDO via RPC using batched/coalesced log appends rather than one-row-per-small-fragment writes
- finalize run in RunDO via RPC and report the terminal summary back to ProjectDO
- let ProjectDO perform or retry the D1 run-summary sync
- release the project lock in ProjectDO via RPC
- advance the ProjectDO FIFO queue via RPC if another pending run exists and enqueue exactly one queue message for the newly promoted executable run
- destroy sandbox in `finally`
The queue consumer is responsible for best-effort cleanup on:
- sandbox startup failure
- checkout failure
- config parse failure
- command non-zero exit
- worker-side exception
RunDO should still receive a terminal state update for all of those paths. If a command timeout or cancellation occurs, the queue consumer must explicitly terminate the underlying Sandbox process or session and must not assume the SDK timeout alone has stopped execution.
The runner model must make cancellation explicit.
Each executing build step must run in a way that exposes a controllable Sandbox process or session handle.
Required semantics:
- soft cancel attempts to stop execution at the running process boundary or via a graceful process signal
- hard cancel escalates by explicitly killing the Sandbox process group, session, or sandbox when graceful shutdown does not complete within 30 seconds
- command timeout alone is not sufficient as a cancellation mechanism; the implementation must actively terminate the underlying process or session because Sandbox SDK command timeouts only end the caller-side wait
- the next FIFO run must not be promoted until the active run is confirmed stopped
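A sketch of the soft-then-hard escalation under an assumed process handle. `softCancel`, `hardKill`, and `exited` are hypothetical stand-ins for whatever signaling and destruction operations the Sandbox SDK exposes; only the escalation timing and the wait-for-confirmed-stop come from the semantics above:

```typescript
// Hypothetical controllable handle for an executing build step.
interface CancellableProc {
  softCancel(): void;    // graceful signal at the process boundary
  hardKill(): void;      // forceful process/session termination
  exited: Promise<void>; // resolves once the process is actually gone
}

const HARD_KILL_AFTER_MS = 30_000; // 30s escalation window from the policy above

async function cancelRun(proc: CancellableProc, graceMs = HARD_KILL_AFTER_MS): Promise<"soft" | "hard"> {
  proc.softCancel();
  // Wait for graceful exit, but only within the grace window.
  const graceful = await Promise.race([
    proc.exited.then(() => true),
    new Promise<boolean>((resolve) => setTimeout(() => resolve(false), graceMs)),
  ]);
  if (graceful) return "soft";
  proc.hardKill();
  // Do not promote the next FIFO run until the stop is confirmed.
  await proc.exited;
  return "hard";
}
```

Note the timeout only bounds the *wait*; actual termination always comes from an explicit kill, matching the rule that SDK command timeouts alone do not stop execution.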
v1 uses one platform-owned runner image. Repositories cannot choose or override the runner image in .anvil.yml.
Recommended image source:
docker/runner.Dockerfile
Recommended Dockerfile:
ARG SANDBOX_VERSION=0.7.0
FROM docker.io/cloudflare/sandbox:${SANDBOX_VERSION}-python
ENV DEBIAN_FRONTEND=noninteractive \
CI=1 \
COREPACK_ENABLE_DOWNLOAD_PROMPT=0 \
NPM_CONFIG_UPDATE_NOTIFIER=false \
NPM_CONFIG_FUND=false \
PNPM_HOME=/opt/pnpm \
PATH=/opt/pnpm:$PATH
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
file \
git \
git-lfs \
jq \
pkg-config \
procps \
rsync \
unzip \
wget \
xz-utils \
zip \
&& rm -rf /var/lib/apt/lists/*
RUN corepack enable \
&& corepack prepare pnpm@9.15.0 --activate \
&& corepack prepare yarn@4.6.1 --activate
WORKDIR /workspace

Runner contract for v1:
- Ubuntu 22.04-based Cloudflare Sandbox image
- Node.js 20 LTS with npm
- Bun from the Cloudflare base image
- Python 3.11 with `pip` and `venv`
- `pnpm` and `yarn` via `corepack`
- common CI utilities including `git`, `git-lfs`, `curl`, `wget`, `jq`, `zip`, `unzip`, `file`, `procps`, `rsync`, `build-essential`, and `pkg-config`
The Docker base image version must stay in lockstep with the @cloudflare/sandbox npm package version used by the application.
Repository checkout in v1 should use a deliberately narrow policy.
Allowed repository URL policy:
- repository URLs must use `https://`
- the host must be a normal DNS hostname
- embedded credentials in the URL are rejected
- query strings and fragments are rejected
- explicit non-default ports are rejected
- `localhost`, loopback hosts, and IP-literal hosts are rejected
- standard TLS validation is required; self-signed or private CA repositories are unsupported in v1
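The policy maps cleanly onto the WHATWG URL parser. This sketch is illustrative, and the IP-literal test in particular is simplified; note that the parser elides default ports, so only explicit non-default ports survive into `url.port`:

```typescript
// Enforce the v1 repository URL policy described above.
function isAllowedRepoUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // not parseable at all
  }
  if (url.protocol !== "https:") return false;
  if (url.username !== "" || url.password !== "") return false; // embedded credentials
  if (url.search !== "" || url.hash !== "") return false;       // query string / fragment
  if (url.port !== "") return false;                            // explicit non-default port
  const host = url.hostname;
  if (host === "localhost" || host.endsWith(".localhost")) return false;
  if (host.startsWith("[")) return false;                       // IPv6 literal
  if (/^\d{1,3}(\.\d{1,3}){3}$/.test(host)) return false;       // IPv4 literal (covers loopback)
  return true;
}
```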
Private repository credential handling in v1:
- each project may store one encrypted repository token in D1
- the token is decrypted only for clone or fetch operations
- the queue consumer may construct an in-memory credentialed HTTPS URL in the provider's supported PAT format and pass it directly to `sandbox.gitCheckout(...)`
- the credentialed URL is an ephemeral runtime value only and must never be stored in D1, ProjectDO, RunDO, `.git/config`, or any persisted repository config files
- the clean repository URL stored in D1 must remain uncredentialed
- checkout failures and runner logs must redact credentialed URLs and tokens before they are emitted or returned
- the token must never appear in structured logs, user-visible error messages, or persisted configuration
The v1 checkout model is intentionally limited to keep repository access predictable and avoid leaking credentials through common git transport surfaces.
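The ciphertext/nonce/key-version column shape used for repository tokens (and for webhook secrets in ProjectDO) can be sketched with AES-256-GCM. Node's `createCipheriv` is used for illustration; a Worker would use WebCrypto with the same shape, and keyring lookup by version is out of scope here:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Mirrors the *_ciphertext / *_nonce / *_key_version column triple.
interface EncryptedToken {
  ciphertext: Buffer;  // GCM auth tag appended at the end
  nonce: Buffer;
  keyVersion: number;
}

function encryptToken(plaintext: string, key: Buffer, keyVersion: number): EncryptedToken {
  const nonce = randomBytes(12); // fresh nonce for every encryption
  const cipher = createCipheriv("aes-256-gcm", key, nonce);
  const body = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { ciphertext: Buffer.concat([body, cipher.getAuthTag()]), nonce, keyVersion };
}

// Decrypt only at use time; tampering or a wrong key fails the GCM tag.
function decryptToken(enc: EncryptedToken, key: Buffer): string {
  const tag = enc.ciphertext.subarray(enc.ciphertext.length - 16);
  const body = enc.ciphertext.subarray(0, enc.ciphertext.length - 16);
  const decipher = createDecipheriv("aes-256-gcm", key, enc.nonce);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(body), decipher.final()]).toString("utf8");
}
```

Storing `keyVersion` alongside the ciphertext is what allows key rotation: new writes use the new key while old rows remain decryptable until re-encrypted.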
Pipeline configuration is repository-defined.
The default config path is `.anvil.yml`.
Projects may store a custom path in D1, for example:
- .config/anvil.yml
- ci/anvil.yml
For v1, `config_path` must be repo-relative. Absolute paths and path traversal such as `..` are rejected.
version: 1
checkout:
  depth: 1
run:
  workingDirectory: .
  timeoutSeconds: 720
steps:
  - name: install
    run: npm ci
  - name: test
    run: npm test
  - name: build
    run: npm run build

v1 step shape is intentionally minimal:
- `name`
- `run`
`run.timeoutSeconds` is a whole-run timeout, not a per-step timeout.
For v1:
- maximum config file size: 64 KiB
- maximum step count: 20
- maximum step name length: 64
- maximum step command length: 4096 bytes
- maximum `run.timeoutSeconds`: 720
- `workingDirectory` must be repo-relative
- absolute paths are rejected
- path traversal such as `..` is rejected
The config file is validated after checkout.
If validation fails:
- the run is marked failed
- a structured `warn` or `error` log line is emitted
- no build commands are executed
- unknown top-level fields must be rejected
- unknown step-level fields must be rejected
- config values exceeding the v1 limits above must be rejected
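The limit checks above, as a plain validation sketch. The real implementation validates the parsed YAML with util-en-garde; this only illustrates the numeric and path limits (the 64 KiB size cap applies to the raw file bytes before parsing):

```typescript
// Parsed .anvil.yml shape for v1 (name/run steps only).
interface AnvilStep {
  name: string;
  run: string;
}
interface AnvilConfig {
  version: 1;
  checkout?: { depth?: number };
  run?: { workingDirectory?: string; timeoutSeconds?: number };
  steps: AnvilStep[];
}

// Return every limit violation rather than failing on the first one.
function validateConfigLimits(cfg: AnvilConfig): string[] {
  const errors: string[] = [];
  if (cfg.steps.length > 20) errors.push("too many steps (max 20)");
  for (const step of cfg.steps) {
    if (step.name.length === 0 || step.name.length > 64) {
      errors.push("step name must be 1..64 chars");
    }
    if (new TextEncoder().encode(step.run).length > 4096) {
      errors.push(`step command too long: ${step.name.slice(0, 20)}`);
    }
  }
  const timeout = cfg.run?.timeoutSeconds ?? 720;
  if (timeout < 1 || timeout > 720) errors.push("run.timeoutSeconds must be 1..720");
  const wd = cfg.run?.workingDirectory ?? ".";
  if (wd.startsWith("/") || wd.split("/").includes("..")) {
    errors.push("workingDirectory must be repo-relative");
  }
  return errors;
}
```

A non-empty result maps to the failure path above: the run is marked failed, a structured log line is emitted, and no build commands execute.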
v1 keeps repository config intentionally small, but the schema should leave room for future expansion such as:
- environment variables
- cache hints
- artifact declarations
- image selection
- conditional steps
These are not implemented in v1. The v1 runner image is platform-owned and cannot be selected from .anvil.yml.
All D1 id columns use the canonical prefixed identifier format defined in section 3.4.
users:
id TEXT PRIMARY KEY
slug TEXT NOT NULL UNIQUE
email TEXT NOT NULL UNIQUE
display_name TEXT NOT NULL
created_at INTEGER NOT NULL
disabled_at INTEGER NULL
users.slug is the canonical owner slug.
Indexes:
CREATE UNIQUE INDEX idx_users_slug ON users(slug);
CREATE UNIQUE INDEX idx_users_email ON users(email);
user_id TEXT PRIMARY KEY
algorithm TEXT NOT NULL
digest TEXT NOT NULL
iterations INTEGER NOT NULL
salt BLOB NOT NULL
password_hash BLOB NOT NULL
updated_at INTEGER NOT NULL
projects:
id TEXT PRIMARY KEY
owner_user_id TEXT NOT NULL
owner_slug TEXT NOT NULL
project_slug TEXT NOT NULL
name TEXT NOT NULL
repo_url TEXT NOT NULL
default_branch TEXT NOT NULL
config_path TEXT NOT NULL DEFAULT '.anvil.yml'
repo_token_ciphertext BLOB NULL
repo_token_key_version INTEGER NULL
repo_token_nonce BLOB NULL
created_at INTEGER NOT NULL
updated_at INTEGER NOT NULL
projects.owner_slug is a denormalized copy of users.slug kept for owner-scoped lookup efficiency.
Indexes:
CREATE UNIQUE INDEX idx_projects_owner_project_slug ON projects(owner_slug, project_slug);
CREATE INDEX idx_projects_owner_user_updated_at ON projects(owner_user_id, updated_at DESC);
CREATE INDEX idx_projects_updated_at ON projects(updated_at DESC);
invites:
id TEXT PRIMARY KEY
created_by_user_id TEXT NOT NULL
token_hash BLOB NOT NULL
expires_at INTEGER NOT NULL
accepted_by_user_id TEXT NULL
accepted_at INTEGER NULL
created_at INTEGER NOT NULL
Indexes:
CREATE UNIQUE INDEX idx_invites_token_hash ON invites(token_hash);
CREATE INDEX idx_invites_created_by_created_at ON invites(created_by_user_id, created_at DESC);
CREATE INDEX idx_invites_expires_at ON invites(expires_at);
run_index:
id TEXT PRIMARY KEY
project_id TEXT NOT NULL
triggered_by_user_id TEXT NULL
trigger_type TEXT NOT NULL
branch TEXT NOT NULL
commit_sha TEXT NULL
status TEXT NOT NULL
queued_at INTEGER NOT NULL
started_at INTEGER NULL
finished_at INTEGER NULL
exit_code INTEGER NULL
`run_index` is the last-synced durable summary in D1. While a run is active, RunDO remains authoritative and D1 status may lag. Immediately after acceptance, the D1 row may be temporarily absent until ProjectDO reconciliation succeeds.
Indexes:
- `CREATE INDEX idx_run_index_project_queued_at ON run_index(project_id, queued_at DESC);`
- `CREATE INDEX idx_run_index_project_started_at ON run_index(project_id, started_at DESC);`
- `CREATE INDEX idx_run_index_user_queued_at ON run_index(triggered_by_user_id, queued_at DESC);`
- `CREATE INDEX idx_run_index_status_queued_at ON run_index(status, queued_at DESC);`
- list projects for current user
- fetch one project by owner-scoped slug or id
- resolve owner-scoped public webhook routes efficiently
- list recent runs for one project using keyset pagination, not offset pagination
- list recent runs initiated by one user using keyset pagination, not offset pagination
- fetch one run summary by id
- create and redeem invite tokens efficiently
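The keyset pagination pattern above can be sketched as a small query builder. This is an illustrative sketch only: the cursor shape, the row-value comparison syntax (supported by SQLite 3.15+), and the function name are assumptions, and binding the query to D1 is omitted.

```typescript
// Hypothetical keyset (cursor) pagination over run_index, shaped to use
// idx_run_index_project_queued_at. Offset pagination is deliberately avoided.
interface RunCursor {
  queuedAt: number; // queued_at of the last row on the previous page
  id: string;       // run id as a deterministic tiebreaker
}

function recentRunsQuery(
  projectId: string,
  limit: number,
  cursor?: RunCursor,
): { sql: string; params: (string | number)[] } {
  if (cursor) {
    return {
      sql:
        "SELECT * FROM run_index WHERE project_id = ? " +
        "AND (queued_at, id) < (?, ?) " +
        "ORDER BY queued_at DESC, id DESC LIMIT ?",
      params: [projectId, cursor.queuedAt, cursor.id, limit],
    };
  }
  return {
    sql:
      "SELECT * FROM run_index WHERE project_id = ? " +
      "ORDER BY queued_at DESC, id DESC LIMIT ?",
    params: [projectId, limit],
  };
}
```

The client passes the last row's `(queued_at, id)` back as the cursor for the next page, so page cost stays constant regardless of depth.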
For every private route:
- Read the session identifier from the request, typically from the `Authorization` header.
- Resolve the session in KV.
- Reject if missing or expired.
- Open D1 session.
- Read project ownership or resource ownership from D1.
- Validate request payload and derive the target Durable Object public ID if applicable.
- If Durable Object state is needed, invoke the target object via RPC using trusted typed inputs.
- Shape the HTTP response in the Worker.
The session in KV identifies the user. The authoritative authorization checks still happen in D1. Durable Objects must not read browser session headers or perform primary authentication for private routes.
`runId` may exist before its D1 `run_index` row is visible because ProjectDO accepts the run before reconciliation completes.
For private run-scoped routes such as:
- `GET /api/private/runs/:runId`
- `POST /api/private/runs/:runId/cancel`
- `POST /api/private/runs/:runId/log-ticket`
the Worker should:
- validate the session via KV
- attempt to resolve the run from D1 `run_index`
- if the D1 row is missing, call `RunDO` using `runId` to fetch minimal trusted metadata such as `projectId` and current run status
- authorize the caller by checking project ownership in D1 using that `projectId`
- continue with the route-specific logic
This preserves D1 as the source of authorization while allowing newly accepted runs to be queried or canceled immediately.
`GET /api/private/runs/:runId/logs` is authenticated by a short-lived log-stream ticket rather than by the `Authorization` header.
- Client calls `POST /api/private/runs/:runId/log-ticket`.
- Worker validates the session via KV.
- Worker checks run ownership via D1 or, if needed during reconciliation lag, via the `RunDO`-assisted ownership flow above.
- Worker stores a short-lived, best-effort single-use ticket in KV.
- Client opens `GET /api/private/runs/:runId/logs?ticket=...`.
- Worker validates and consumes the ticket.
- Worker forwards the authenticated WebSocket upgrade to `RunDO`.
- `RunDO` attaches the socket using trusted Worker-provided auth context.
Webhook configuration is owned by ProjectDO, not D1.
- Request arrives at `/api/public/hooks/:provider/:ownerSlug/:projectSlug`.
- Worker frontdoor resolves `(ownerSlug, projectSlug) -> projectId` from D1.
- Worker frontdoor derives `ProjectDO` from `idFromName(projectId)`.
- Worker calls `ProjectDO` RPC to load the minimal webhook verification material and project-local webhook settings for that provider.
- Worker authenticates the incoming webhook request using the provider signature scheme.
- Worker normalizes the verified event and applies event-type and branch policy checks.
- Worker calls `ProjectDO` RPC to deduplicate the delivery and accept the verified trigger.
- If accepted, `ProjectDO` allocates `runId` and durably records the accepted run.
- Worker attempts to write the D1 summary row and, if the run is currently executable, enqueue it.
- If D1 sync or enqueue fails, `ProjectDO` retains reconciliation state and retries later.
Supported providers in v1:
- GitHub
- GitLab
- Gitea
Webhook management scope in v1:
- users configure provider webhooks manually in the upstream provider UI
- anvil stores provider verification material and enablement state
- anvil does not create, update, or delete provider webhooks through provider APIs in v1
Webhook trigger policy in v1:
- only `push` events create runs
- only pushes to `projects.default_branch` create runs
- provider ping/test events return success but do not create runs
- duplicate webhook deliveries must be deduplicated by `(project_id, provider, delivery_id)` for 72 hours
- manual triggers are not deduplicated
The webhook config itself is not in D1. Only the stable mapping from public owner-scoped slug to project identity is in D1.
Slug policy for v1:
- allowed characters: alphanumeric, hyphen (`-`), underscore (`_`)
- user slug is chosen once at signup
- rename flow is deferred
- project slug is unique within an owner scope
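The slug character policy above reduces to a one-line check. In this sketch, the character set comes from the text; the 1–64 length bounds are assumptions, since v1 only fixes the allowed characters.

```typescript
// Sketch of the v1 slug policy: alphanumeric, hyphen, underscore.
// Length bounds (1..64) are assumed for illustration.
const SLUG_PATTERN = /^[A-Za-z0-9_-]+$/;

function isValidSlug(slug: string): boolean {
  return slug.length >= 1 && slug.length <= 64 && SLUG_PATTERN.test(slug);
}
```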
This keeps webhook secrets localized to the project actor while preserving simple public routing and a Worker-owned authentication boundary.
- Authenticated user calls `POST /api/private/projects`.
- Worker uses the D1 primary session.
- Worker inserts a `projects` row with owner identity.
- Worker returns the project summary.
- ProjectDO is created lazily on first use.
- Authenticated user calls `POST /api/private/projects/:projectId/runs`.
- Worker validates the session via KV.
- Worker checks project ownership via D1.
- Request may include an optional branch override; if omitted, anvil uses `projects.default_branch`.
- Worker calls `ProjectDO` RPC to accept the run.
- `ProjectDO` allocates `runId`, snapshots the non-secret execution inputs for the run, records the accepted run, and initializes `RunDO`.
- Worker returns `202 Accepted` with `runId`.
- Worker attempts to insert the `run_index` row in D1.
- Worker enqueues the run only if `ProjectDO` reports that it is currently executable.
- If D1 sync or enqueue fails, `ProjectDO` retries reconciliation asynchronously.
Users may cancel:
- the active run for a project
- any pending run in that project's FIFO queue
Cancellation is requested through:
`POST /api/private/runs/:runId/cancel`
Behavior:
- if the run is pending, Worker authorizes the caller and then invokes ProjectDO RPC to remove it from the FIFO queue and mark it canceled
- if the run is active, Worker authorizes the caller and then invokes ProjectDO and RunDO via RPC; anvil first attempts a soft cancel at the running process boundary
- if soft cancel does not complete in time and the runtime allows it, anvil escalates to a hard kill of the sandbox process or session
- repeated cancel requests for an active run are idempotent and do not create a second cancellation workflow
- RunDO transitions the run toward canceled and ProjectDO advances the next queued run
- ProjectDO reconciles D1 `run_index` to terminal status `canceled`
- Public webhook request hits the WAF-protected prefix.
- Worker resolves project identity in D1 using the owner-scoped slug.
- Worker calls `ProjectDO` RPC to load verification material for the provider.
- Worker authenticates the webhook request and validates provider event type, default-branch policy, and delivery idempotency preconditions.
- If the delivery should create a run, Worker calls `ProjectDO` RPC to accept it and append it to the per-project FIFO queue.
- `ProjectDO` allocates `runId`, snapshots the non-secret execution inputs for the run, and initializes `RunDO`.
- Worker attempts to insert the `run_index` row in D1.
- Worker enqueues the run only if `ProjectDO` reports that it is currently executable.
- If D1 sync or enqueue fails, `ProjectDO` retries reconciliation asynchronously.
- Queue consumer receives `{projectId, runId}`.
- Queue consumer confirms with `ProjectDO` that the run is still the current executable run for the project and retrieves the accepted-run execution snapshot.
- If `ProjectDO` reports the message is stale, duplicate, canceled, or not executable, the consumer acknowledges it without creating a Sandbox.
- Queue consumer creates a Sandbox with `keepAlive: true`.
- Queue consumer uses the accepted-run snapshot and the latest repository token to check out the repository inside the Sandbox.
- Sandbox loads the snapshotted config path.
- Queue consumer calls `RunDO` RPC to transition the run from `queued` to `starting`.
- Validated commands are written to `RunDO` step rows through RPC.
- Queue consumer starts heartbeat updates to `ProjectDO` and then calls `RunDO` RPC to transition the run to `running`.
- Commands execute sequentially.
- Output chunks stream to `RunDO` through RPC.
- `RunDO` broadcasts to viewers.
- Terminal state is written to `RunDO` through RPC and reported back to `ProjectDO`.
- `ProjectDO` updates or retries the D1 `run_index` terminal sync.
- Queue consumer calls `ProjectDO` RPC to release the lock.
- Queue consumer calls `ProjectDO` RPC to advance the next FIFO pending run, if any, and enqueue exactly one queue message for the newly promoted executable run.
- Sandbox is destroyed in a `finally` block.
Frontend lives under `src/client` and is served by the Worker.
Recommended stack:
- React Router
- TanStack Query
- typed API wrapper consuming `util-en-garde` contracts
- frontend auth wrapper storing the session identifier in browser `localStorage`
- frontend D1 bookmark wrapper storing the latest read-replication bookmark in browser `localStorage`
- log stream wrapper that mints short-lived tickets before opening the WebSocket
- `/app/projects`
- `/app/projects/new`
- `/app/projects/:projectId`
- `/app/runs/:runId`
- `/app/login`
- show projects owned by current user
- show last known run status
- repo URL
- default branch
- config path
- recent runs
- trigger run button
- webhook summary
- pending queue summary
- run status
- step list
- live log panel
- reconnecting log stream client
Shared contracts live under src/contracts.
Recommended files:
- `auth.ts`
- `project.ts`
- `run.ts`
- `webhook.ts`
- `repo-config.ts`
- `log.ts`
- `common.ts`
- `LoginRequest`
- `LoginResponse`
- `CreateProjectRequest`
- `ProjectSummary`
- `ProjectDetail`
- `TriggerRunRequest`
- `RunSummary`
- `RunDetail`
- `LogStreamTicketResponse`
- `WebhookSummary`
- `UpsertWebhookRequest`
- `WebhookTriggerPayload`
- `RepoConfig`
- `LogEvent`
anvil/
src/
contracts/
auth.ts
project.ts
run.ts
webhook.ts
repo-config.ts
log.ts
common.ts
worker/
index.ts
env.ts
api/
public/
auth.ts
webhooks.ts
private/
me.ts
projects.ts
runs.ts
webhooks.ts
auth/
headers.ts
sessions.ts
passwords.ts
tickets.ts
durable/
project-do.ts
run-do.ts
queue/
consumer.ts
messages.ts
sandbox/
runner.ts
git.ts
repo-config.ts
commands.ts
db/
d1/
schema/
repositories/
durable/
schema/
repositories/
migrate.ts
services/
project-service.ts
run-service.ts
webhook-service.ts
id-service.ts
client/
main.tsx
app.tsx
router.tsx
pages/
components/
lib/
drizzle/
d1/
durable/
docker/
runner.Dockerfile
public/
wrangler.jsonc
package.json
tsconfig.json
Workflows are not part of v1 execution, but the specification should leave room for them.
- multi-stage pipelines
- retries across long-running steps
- durable approval gates
- scheduled retries or backoff across external systems
- artifact publication or promotion flows
- long waits for external events
v1 uses:
- Worker frontdoor
- Queue
- ProjectDO
- RunDO
- Sandbox
A future v2 may add Workflows as an orchestration layer above the queue consumer:
- trigger accepted
- workflow started
- workflow step starts sandbox
- workflow step waits for completion event
- workflow step publishes artifacts or notifies external systems
If Workflows are added later, every step must be designed idempotently.
R2 is not in v1, but the specification should reserve a clear role for it.
- full run log archival
- uploaded artifacts
- test reports
- compressed logs for completed runs
- build outputs too large for Durable Object SQLite retention
- RunDO SQLite retains only a bounded hot tail for live UI and recent history.
- R2 stores immutable completed-run log archives.
Suggested key patterns:
- `logs/{projectId}/{runId}.txt`
- `logs/{projectId}/{runId}.jsonl`
- `artifacts/{projectId}/{runId}/{artifactName}`
If R2 is added later, add D1 tables such as:
- `run_archives`
- `run_artifacts`
v1 does not implement these.
- one active run per project
- FIFO pending queue per project
v1 supports:
- one active run per project
- FIFO pending queue for additional accepted runs
- user-initiated cancellation of active runs
- user-initiated cancellation of pending runs
ProjectDO is responsible for queue mutation, cancellation, and advancement.
ProjectDO is the sole owner of project-level concurrency state.
No other component should attempt to coordinate active-run state or pending-queue state outside ProjectDO.
Every accepted run must end in exactly one terminal state:
- `passed`
- `failed`
- `canceled`
The canonical run status enum for v1 is:
- `queued`
- `starting`
- `running`
- `cancel_requested`
- `canceling`
- `passed`
- `failed`
- `canceled`
RunDO should expose the freshest status. D1 `run_index.status` uses the same enum but may lag for active runs.
`pending` is an internal ProjectDO queue concept in v1, not a public run status. Public APIs and persisted run summaries should use only the canonical status enum above.
v1 should allow only these transitions:
- `queued -> starting`
- `queued -> canceled`
- `starting -> running`
- `starting -> failed`
- `starting -> cancel_requested`
- `running -> passed`
- `running -> failed`
- `running -> cancel_requested`
- `cancel_requested -> canceling`
- `cancel_requested -> canceled`
- `canceling -> canceled`
- `canceling -> failed` if forced termination or cleanup fails after cancellation has begun
Terminal states do not transition further.
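The transition list above can be encoded as a lookup table. This sketch mirrors the allowed transitions verbatim and is illustrative only; the text places actual enforcement inside RunDO.

```typescript
// Sketch of the v1 run-state machine as a transition table.
type RunStatus =
  | "queued"
  | "starting"
  | "running"
  | "cancel_requested"
  | "canceling"
  | "passed"
  | "failed"
  | "canceled";

const ALLOWED_TRANSITIONS: Record<RunStatus, RunStatus[]> = {
  queued: ["starting", "canceled"],
  starting: ["running", "failed", "cancel_requested"],
  running: ["passed", "failed", "cancel_requested"],
  cancel_requested: ["canceling", "canceled"],
  canceling: ["canceled", "failed"],
  // Terminal states do not transition further.
  passed: [],
  failed: [],
  canceled: [],
};

function canTransition(from: RunStatus, to: RunStatus): boolean {
  return ALLOWED_TRANSITIONS[from].includes(to);
}
```

Rejecting disallowed transitions at the RunDO RPC boundary keeps stale queue consumers or duplicate deliveries from corrupting run state.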
- queue consumer destroys the sandbox
- RunDO finalizes run status
- ProjectDO reconciles and retries D1 `run_index` updates
- ProjectDO releases the active lock
For v1:
- `project_webhook_deliveries` rows should be retained for 72 hours
- terminal `project_runs` rows that are fully reconciled to D1 should be pruned after 7 days
- `RunDO` detail state (`run_meta`, `run_steps`, and the retained hot log tail) should be retained for 7 days after terminal completion
After RunDO detail retention expires:
- D1 `run_index` remains the durable summary source
- `GET /api/private/runs/:runId` should still return the D1 summary if it exists
- the response should indicate that detailed run state is no longer available
For v1, enforce:
- `run.timeoutSeconds` from repo config as the user-visible whole-run timeout
- `run.timeoutSeconds` must not exceed 720
- the configured run timeout must leave headroom within the queue consumer's 15-minute wall-clock limit for checkout, reconciliation, cancellation, and cleanup
- the queue consumer Worker should run with `limits.cpu_ms` set to 300000
- internal platform safety timeouts may exist, but they are implementation details rather than user-configurable step timeouts
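A config-validation sketch of those timeout rules follows. The 720-second cap and the 15-minute (900 s) consumer wall clock come from this section; the 120-second overhead reserve is an assumed illustration of what "headroom" might mean in practice.

```typescript
// Sketch of v1 run.timeoutSeconds validation. Returns an error message, or
// null when the value is acceptable.
const MAX_RUN_TIMEOUT_SECONDS = 720;       // hard cap from the spec
const CONSUMER_WALL_CLOCK_SECONDS = 900;   // queue consumer 15-minute limit
const ASSUMED_OVERHEAD_SECONDS = 120;      // assumed reserve for checkout/cleanup

function validateRunTimeout(timeoutSeconds: number): string | null {
  if (!Number.isInteger(timeoutSeconds) || timeoutSeconds <= 0) {
    return "run.timeoutSeconds must be a positive integer";
  }
  if (timeoutSeconds > MAX_RUN_TIMEOUT_SECONDS) {
    return "run.timeoutSeconds must not exceed 720";
  }
  if (timeoutSeconds > CONSUMER_WALL_CLOCK_SECONDS - ASSUMED_OVERHEAD_SECONDS) {
    return "run timeout leaves no headroom within the queue consumer wall clock";
  }
  return null;
}
```

With these constants the 720-second cap is the binding constraint, but the headroom check keeps the rule correct if either constant changes later.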
ProjectDO must use an alarm or equivalent watchdog mechanism to detect stale active-run heartbeats and recover orphaned runs in v1.
Structured level logging is required in v1.
All runtime components should emit structured JSON logs:
- Worker frontdoor
- `ProjectDO`
- `RunDO`
- queue consumer
Required log levels:
- `debug`
- `info`
- `warn`
- `error`
Minimum required fields on every log event:
- `ts`
- `level`
- `event`
- `component`
Include these contextual fields whenever available:
- `requestId`
- `projectId`
- `runId`
- `userId`
- `queueMessageId`
- `provider`
- `deliveryId`
- `attempt`
- `status`
- `errorCode`
Required structured log events include at least:
- run acceptance
- queue dispatch retry
- D1 sync retry
- stale queue delivery
- sandbox startup failure
- checkout failure
- config validation failure
- run cancellation request
- cancel escalation to hard kill
- watchdog recovery of an orphaned run
Structured logs must never contain:
- repository tokens or PATs
- session identifiers
- webhook secrets
- log-stream tickets
- raw `Authorization` headers
- credentialed repository URLs
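A log-event formatter covering those rules might be sketched as follows. The required field names come from this section; the redaction key pattern and the serializer are assumptions, and the safest policy remains never passing secret values to the logger at all.

```typescript
// Sketch of a structured log formatter that emits the required minimum fields
// and redacts suspiciously named keys as a defense in depth.
type Level = "debug" | "info" | "warn" | "error";

const SENSITIVE_KEY_PATTERN = /token|secret|password|authorization|ticket|session/i;

function formatLogEvent(
  level: Level,
  event: string,
  component: string,
  fields: Record<string, unknown> = {},
): string {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(fields)) {
    safe[key] = SENSITIVE_KEY_PATTERN.test(key) ? "[REDACTED]" : value;
  }
  // ts/level/event/component are always present; contextual fields follow.
  return JSON.stringify({ ts: Date.now(), level, event, component, ...safe });
}
```

Each component would wrap this with its own `component` value, so one WAF-visible request can be traced across frontdoor, Durable Objects, and the queue consumer via `requestId`.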
- webhook secrets live in ProjectDO SQLite as encrypted blobs
- user-provided repository tokens are encrypted before being stored in D1
- stored repository tokens are used for Git access only in v1
- plaintext repository tokens and webhook secrets are never persisted in D1, KV, or Durable Object SQLite
- short-lived WebSocket log-stream tickets live in KV
- password hashes are derived with PBKDF2 using a per-user random salt
anvil should support storing one user-provided repository token per project in D1 using application-level encryption.
Recommended v1 design:
- one global app master key in the Worker environment
- the master key has a monotonically increasing integer version
- when a user saves a token, anvil encrypts it before writing to D1
- the D1 project row stores ciphertext plus the key version and nonce/IV
- reads decrypt using the master key matching the stored version
- future key rotation is performed by introducing a new version and re-encrypting rows over time
Suggested storage fields per encrypted token:
- `repo_token_ciphertext`
- `repo_token_key_version`
- `repo_token_nonce`
The exact cipher can be implementation-defined, but it should be an authenticated encryption mode. The important invariant for the specification is that token plaintext never lands in the database.
anvil should support storing one webhook secret per provider per project in ProjectDO SQLite using the same application-level encryption model.
Recommended v1 design:
- use the same master-key versioning strategy as encrypted repository tokens
- encrypt the webhook secret before writing it to `project_webhooks`
- store ciphertext plus the key version and nonce/IV alongside the webhook row
- decrypt only in the Worker-owned webhook verification path before invoking `ProjectDO` acceptance RPC
Suggested storage fields per encrypted webhook secret:
- `secret_ciphertext`
- `secret_key_version`
- `secret_nonce`
- all public routes under `/api/public/*`
- single WAF rate limit rule on that prefix
- login and webhook ingress share the same outer rate limit boundary
- KV authenticates session identity
- D1 authorizes project ownership in v1
- the Worker authenticates and authorizes both private API requests and public webhook requests before invoking Durable Objects
- ProjectDO and RunDO enforce only trusted RPC invariants and object-local state transitions
- private API requests carry the opaque session identifier explicitly rather than relying on browser cookies
- WebSocket log streaming is authorized by short-lived best-effort single-use KV ticket after D1 ownership verification
v1 is invite-only.
Recommended D1 table: `invites`, storing:
- hashed invite token
- inviter user id
- expiry
- accepted by user id
- accepted at
v1 invite semantics:
- any registered user may generate an invite link
- invite links carry a simple opaque token
- the stored database value should be a hash of that token, not the raw token itself
- only a valid invite token allows a new user record to be created in v1
- v1 does not impose per-user invite caps or invite-specific application rate limits beyond normal authenticated route protections
Implementation should be split into separate backend and frontend tracks. Each phase should deliver a coherent product slice and minimize dependencies on unfinished work in other phases.
- repo skeleton
- shared contracts under `src/contracts`
- `.anvil.yml` schema
- D1 schema + Drizzle setup
- canonical ID generator and prefix conventions
- structured logger foundation
- KV session helper
- login route
- private auth middleware
- invite generation and invite acceptance flow
- `GET /api/private/me`
- `GET /api/private/projects`
- `POST /api/private/projects`
- `PATCH /api/private/projects/:projectId`
- D1 read and primary session helpers
- project ownership checks in D1
- repository URL validation
- `config_path` validation
- encrypted repository token storage in D1
- `ProjectDO` schema and project-local coordination state
- accepted-run ledger and FIFO queue logic
- minimal `RunDO` schema for run metadata, steps, and rolling logs
- `GET /api/private/projects/:projectId`
- `GET /api/private/projects/:projectId/runs`
- `GET /api/private/runs/:runId`
- `POST /api/private/projects/:projectId/runs`
- queue message contract and queue consumer
- platform runner image and `docker/runner.Dockerfile`
- Sandbox runner
- repository checkout flow
- repository config parsing and validation
- D1 `run_index` creation and terminal update reconciliation
- `POST /api/private/runs/:runId/cancel`
- `POST /api/private/runs/:runId/log-ticket`
- authenticated WebSocket upgrade flow for `GET /api/private/runs/:runId/logs`
- `RunDO` WebSocket Hibernation implementation
- rolling tail replay for newly attached viewers
- active-run heartbeat updates from the queue consumer
- `ProjectDO` watchdog recovery for stale active runs
- queue dispatch retry and stale delivery handling
- D1 sync retry for accepted and terminal runs
- cancel flow for pending and active runs
- `GET /api/private/projects/:projectId/webhooks`
- `PUT /api/private/projects/:projectId/webhooks/:provider`
- `POST /api/private/projects/:projectId/webhooks/:provider/rotate-secret`
- `DELETE /api/private/projects/:projectId/webhooks/:provider`
- `ProjectDO` webhook configuration and encrypted secret storage
- public webhook ingress route
- provider-specific verification adapters for GitHub, GitLab, and Gitea
- webhook delivery dedupe
- default-branch push trigger policy
Frontend work should begin as soon as the corresponding backend slice exposes stable contracts and routes. The frontend track does not need to wait for the entire backend track to be complete.
- app shell and route structure
- frontend auth wrapper using `localStorage`
- typed API wrapper consuming shared contracts
- frontend D1 bookmark wrapper using `localStorage`
- login page
- projects list page
- create project page
- project detail page
- recent runs list
- manual trigger run action
- polling-based run status refresh
- project metadata display for repository URL, default branch, and config path
- queue and active-run summary display
- run detail page
- live log panel
- reconnecting log stream client
- cancel run action
- run state presentation for active, canceling, canceled, failed, and passed runs
- webhook settings UI
- webhook provider summary display
- secret rotation and provider enablement flows
- session rotation policy on privileged operations
- whether logout should blacklist old sessions beyond KV delete
- exact R2 retention policy when archives are added
- whether Workflows should replace the queue consumer or sit above it
- whether local password auth will remain mandatory once OAuth/SAML arrive
anvil v1 should be built around a simple but strong architecture:
- KV for short-lived session state
- D1 for relational control-plane data
- ProjectDO for project-local coordination, accepted-run reconciliation, FIFO run queue, and webhook config
- RunDO for hot run state and log fanout
- WebSocket Hibernation as the default log-stream transport
- Queue + Sandbox for execution on a platform-owned runner image
- repo-defined config from `.anvil.yml`
This design keeps each Cloudflare product aligned with the kind of state it handles best, while leaving clean extension points for Workflows, R2 log archiving, and artifacts later.