diff --git a/CHANGELOG.md b/CHANGELOG.md index 1f5b31f..1280ab9 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,51 @@ All notable changes to loopctl are documented here. +## [Unreleased] — 2026-04-17 — Import merge + agent ergonomics (PR #105) + +### Added + +- `POST /api/v1/projects/:project_id/stories` — create a single story by + epic number (agent-friendly alternative to the UUID-based + `POST /epics/:epic_id/stories`). Role: `:orchestrator`. +- `POST /api/v1/stories/:id/backfill` — mark a story as verified when the + work was completed outside loopctl. Records provenance in + `metadata.backfill` plus an `action: "backfilled"` audit entry and a + `story.backfilled` webhook. Refused for any story with dispatch lineage + (non-pending `agent_status`, `assigned_agent_id`, + `implementer_dispatch_id`, or `verifier_dispatch_id` set) — this is the + structural guard that makes backfill safe regardless of role. Role: + `:orchestrator`. +- `story.backfilled` added to the webhook event allowlist. + +### Fixed + +- `POST /api/v1/projects/:id/import?merge=true` no longer returns + `epics[0].tenant_id: has already been taken for this project` when + clients serialize epic numbers as strings. Epic numbers are normalized + to integers (and story numbers to strings) before validation and DB + lookups. +- Fallback changeset rendering translates Epic/Story unique-number + violations into `"Epic 72 already exists in this project. Use + merge=true..."` regardless of which controller surfaced the error. + +### Changed + +- Data-op roles: create/update for epics, stories, and dependencies + lowered from `:user` to `:orchestrator`. DELETE stays at `:user` per + the destructive-op rule. CLAUDE.md Security section clarified. +- `/loopctl:orchestrate` skill carves out "data operations" (imports, + creates, backfills, dispatches, reads) as operations the orchestrator + can perform directly without dispatching a sub-agent. 
Sub-agents are + only required for editing application code. + +### Security + +- `unique_constraint` error translation now scopes to the `_number_` + index specifically, so future unique constraints (external_id, slug, + etc.) on Epic/Story schemas won't be mis-reported as "X already + exists." + ## [1.0.0] — 2026-04-12 — Chain of Custody v2 27 stories across 7 phases implementing a six-layer trust model for diff --git a/docs/articles/chain-of-custody.md b/docs/articles/chain-of-custody.md index a9dbb4f..fed54ed 100644 --- a/docs/articles/chain-of-custody.md +++ b/docs/articles/chain-of-custody.md @@ -56,6 +56,28 @@ bypassed, the next catches the violation. - **Honest work is the path of least resistance**: The happy path (do real work, report it, get verified) is easier than any bypass attempt. +## Pre-loopctl work: the `backfill` exception + +Onboarding a project with work completed before loopctl exists is a legitimate +need that would otherwise require faking the full lifecycle. `POST +/stories/:id/backfill` (MCP: `backfill_story`) handles this case explicitly: +it marks a story verified, records the reason, PR number, and evidence URL +in `metadata.backfill`, writes an audit entry with `action: "backfilled"` +and `new_state.source: "pre_loopctl"`, and fires a `story.backfilled` +webhook. + +The structural guard is that backfill is refused for any story with +dispatch lineage — non-pending `agent_status`, `assigned_agent_id`, +`implementer_dispatch_id`, or `verifier_dispatch_id` set. That refusal +is what prevents an orchestrator from chaining `force_unclaim → backfill` +to launder dispatched work as pre-loopctl. If a story went through +loopctl's dispatch flow, it must go through the normal report/review/verify +path — no shortcuts. + +This carves out the honest onboarding case (pre-existing work with +provenance) while keeping the L4 structural role separation intact for +anything that entered loopctl's lifecycle. 
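+The refusal predicate described above is small enough to sketch. The following
+is an illustrative JavaScript model, not the server's actual (Elixir)
+implementation; the field names come from the lineage list above:

```javascript
// Illustrative sketch of the dispatch-lineage guard: a story is eligible
// for backfill only if loopctl's dispatch flow never touched it.
// Not the actual server implementation.
function hasDispatchLineage(story) {
  return (
    story.agent_status !== "pending" ||
    story.assigned_agent_id != null ||
    story.implementer_dispatch_id != null ||
    story.verifier_dispatch_id != null
  );
}

// Eligible: imported but never dispatched.
console.log(hasDispatchLineage({ agent_status: "pending" })); // false

// Refused: even after force_unclaim, lineage fields still betray dispatch.
console.log(
  hasDispatchLineage({ agent_status: "pending", implementer_dispatch_id: "d-1" })
); // true
```

+Note that the guard checks lineage fields independently of `agent_status`,
+which is exactly what defeats the `force_unclaim → backfill` laundering chain.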
+ ## Related articles - [Agent Bootstrap](/wiki/agent-bootstrap) — getting started from zero diff --git a/docs/orchestration-guide.md b/docs/orchestration-guide.md index 77e0b79..9c42b36 100644 --- a/docs/orchestration-guide.md +++ b/docs/orchestration-guide.md @@ -203,6 +203,28 @@ curl -X POST http://localhost:4000/api/v1/projects/:id/import \ Use this pattern when you know the status of work at import time. +**Adding stories incrementally.** To add stories to an epic that already +exists, pass `merge: true` to `import_stories` or use `create_story` for a +single story: + +``` +mcp__loopctl__import_stories({ + project_id: "", + merge: true, + payload: { epics: [{ number: 1, title: "Foundation", stories: [...] }] } +}) + +mcp__loopctl__create_story({ + project_id: "", + epic_number: 1, + story: { number: "1.7", title: "New story added later" } +}) +``` + +Without `merge: true`, a duplicate epic number returns 409. The merge path +is also type-tolerant — epic numbers can be sent as integers or numeric +strings; they normalize to integers before the DB lookup. + ### Pattern 2: Bulk mark-complete after import Import stories normally (they start as `pending`), then bulk-complete them in one call: @@ -221,7 +243,52 @@ curl -X POST http://localhost:4000/api/v1/stories/bulk/mark-complete \ Use this pattern when you need to import first and then batch-verify after reviewing what exists. -### Pattern 3: Epic-wide verification +### Pattern 3: Per-story backfill with provenance + +When onboarding a project where you need to mark individual stories as verified +*with a paper trail* (PR number, evidence URL, reason), use +`POST /stories/:id/backfill`. 
This is the preferred pattern when the work was +done outside loopctl and you want the audit log to show why: + +```bash +curl -X POST https://loopctl.com/api/v1/stories/:id/backfill \ + -H "Authorization: Bearer $LOOPCTL_ORCH_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "reason": "completed before loopctl onboarding", + "evidence_url": "https://github.com/acme/app/pull/232", + "pr_number": 232 + }' +``` + +Or via MCP: + +``` +mcp__loopctl__backfill_story({ + story_id: "", + reason: "completed before loopctl onboarding", + evidence_url: "https://github.com/acme/app/pull/232", + pr_number: 232 +}) +``` + +**Structural guard.** Backfill is refused for any story that has loopctl +dispatch lineage — non-pending `agent_status`, `assigned_agent_id`, +`implementer_dispatch_id`, or `verifier_dispatch_id` set. That prevents +using backfill as a chain-of-custody shortcut to "verify" dispatched work +without review. Use Pattern 1 or 2 for bulk onboarding; use Pattern 3 for +surgical per-story backfill with provenance. + +**Idempotent retry.** Retrying a backfill with the same payload returns 200 +(same story). Retrying with different `reason`/`evidence_url`/`pr_number` +returns 422 — investigate before overwriting. + +Sets `agent_status=:reported_done`, `verified_status=:verified`, records the +provenance in `metadata.backfill`, writes an audit entry with +`action: "backfilled"` and `new_state.source: "pre_loopctl"`, and emits a +`story.backfilled` webhook. + +### Pattern 4: Epic-wide verification After implementation agents have reported done on all stories in an epic, the orchestrator can verify the entire epic in a single call instead of verifying each story individually: diff --git a/lib/loopctl_web/controllers/page_html/docs.html.heex b/lib/loopctl_web/controllers/page_html/docs.html.heex index e2c8121..2d03ae7 100644 --- a/lib/loopctl_web/controllers/page_html/docs.html.heex +++ b/lib/loopctl_web/controllers/page_html/docs.html.heex @@ -825,7 +825,7 @@

MCP Tools

- The loopctl MCP server provides 41 typed tools for Claude Code agents. + The loopctl MCP server provides 50 typed tools for Claude Code agents. Install via npm install loopctl-mcp-server and configure in .mcp.json. @@ -975,12 +975,57 @@

+

+ Work Breakdown Tools +

+
+
+

+ import_stories +

+

+ Bulk-import epics and stories into a project. Pass + merge: true + to add to epics that already exist (without it, duplicates return 409). + For large payloads, payload_path + accepts an absolute file path instead of the inline object. + Epic numbers are type-tolerant -- integers and numeric strings both work. +

+
+
+

+ create_story +

+

+ Create a single story in an existing epic. Accepts either + epic_id + (UUID) or (project_id + epic_number) -- + the latter is friendlier when you know the epic number but not the UUID. +

+
+
+

+ backfill_story +

+

+ Mark a story as verified when the work was completed outside loopctl + (pre-onboarding, external delivery). Requires reason; + accepts evidence_url + and pr_number. + Refused for any story with dispatch lineage -- you cannot use backfill + as a chain-of-custody shortcut. Emits a + story.backfilled + webhook and records provenance in metadata.backfill. +

+
+
+

Other Notable Tools

- The full set of 42 tools covers projects, stories, epics, verification, + The full set of 50 tools covers projects, stories, epics, verification, artifacts, orchestrator state, webhooks, skills, token usage, analytics, and knowledge. See the

- 42 typed tools for AI coding agents. No curl needed — agents interact through the MCP server. + 50 typed tools for AI coding agents. No curl needed — agents interact through the MCP server.
@@ -788,7 +788,9 @@

Then import stories via POST /projects/:id/import - with your epic/story JSON. + with your epic/story JSON. Add + ?merge=true + to append to an epic that already exists.

@@ -809,7 +811,7 @@ Install via npm install loopctl-mcp-server or run with npx loopctl-mcp-server. - 42 typed tools for AI coding agents. + 50 typed tools for AI coding agents.

diff --git a/mcp-server/CHANGELOG.md b/mcp-server/CHANGELOG.md index aa328c7..f744791 100644 --- a/mcp-server/CHANGELOG.md +++ b/mcp-server/CHANGELOG.md @@ -5,6 +5,40 @@ All notable changes to `loopctl-mcp-server` are documented here. Format: [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) Versioning: [Semantic Versioning](https://semver.org/spec/v2.0.0.html) +## 2.1.0 — 2026-04-17 (Agent ergonomics) + +### Added + +- `import_stories` now accepts `merge: true` to append stories to epics that + already exist (previously duplicates returned 409 with no way forward). +- `import_stories` now accepts `payload_path` (absolute JSON file path) so + large imports can bypass inline tool-call size limits. When both + `payload` and `payload_path` are passed, inline wins. +- `create_story` — create a single story inside an existing epic. Accepts + either `epic_id` (UUID) or (`project_id` + `epic_number`). No more + wrapping a single story in a bulk import payload. +- `backfill_story` — mark a story as verified when the work was completed + outside loopctl. Records provenance (`reason`, `evidence_url`, + `pr_number`) in `metadata.backfill` plus an audit entry and a + `story.backfilled` webhook. Refused for any story with dispatch + lineage (non-pending `agent_status`, `assigned_agent_id`, + `implementer_dispatch_id`, or `verifier_dispatch_id` set) — cannot be + used as a chain-of-custody shortcut. + +### Changed + +- `import_stories` is type-tolerant on epic numbers. Integer and numeric + string both normalize to integers before DB lookup, fixing the + `epics[0].tenant_id: has already been taken for this project` error + when clients serialized epic numbers as strings. +- `resolvePayload` validates `payload_path` before reading: requires an + absolute path, refuses `/proc`, `/dev`, `/sys` prefixes, rejects + non-regular files, enforces a 5 MiB size cap. 
+- Domain error translation for Epic/Story unique-number violations — + duplicate imports and direct creates now return + `"Epic 72 already exists in this project. Use merge=true..."` instead + of the raw Ecto constraint message. + ## 2.0.0 — 2026-04-12 (Chain of Custody v2) ### Breaking diff --git a/mcp-server/smoke_test.mjs b/mcp-server/smoke_test.mjs new file mode 100644 index 0000000..d822423 --- /dev/null +++ b/mcp-server/smoke_test.mjs @@ -0,0 +1,240 @@ +#!/usr/bin/env node +// Smoke test for the local loopctl MCP server. +// +// Spawns `node index.js` and exchanges JSON-RPC over stdio to verify: +// 1. tools/list returns the expected tool set (including new tools) +// 2. New tools have the expected input schemas +// 3. A read-only tools/call (list_projects) completes against loopctl.com +// +// Usage: node smoke_test.mjs +// Required env: LOOPCTL_ORCH_KEY (inherited from parent) + +import { spawn } from "node:child_process"; +import { fileURLToPath } from "node:url"; +import path from "node:path"; + +const __dirname = path.dirname(fileURLToPath(import.meta.url)); +const serverPath = path.join(__dirname, "index.js"); + +if (!process.env.LOOPCTL_ORCH_KEY) { + console.error("LOOPCTL_ORCH_KEY must be set"); + process.exit(1); +} + +const child = spawn("node", [serverPath], { + stdio: ["pipe", "pipe", "inherit"], + env: { + ...process.env, + LOOPCTL_SERVER: process.env.LOOPCTL_SERVER || "https://loopctl.com", + }, +}); + +let buffer = ""; +const pending = new Map(); // id -> { resolve, reject } +let nextId = 1; + +child.stdout.on("data", (chunk) => { + buffer += chunk.toString("utf8"); + let newlineIdx; + while ((newlineIdx = buffer.indexOf("\n")) >= 0) { + const line = buffer.slice(0, newlineIdx).trim(); + buffer = buffer.slice(newlineIdx + 1); + if (!line) continue; + let msg; + try { + msg = JSON.parse(line); + } catch (err) { + console.error("Non-JSON from server:", line); + continue; + } + if (msg.id != null && pending.has(msg.id)) { + const { resolve } = 
pending.get(msg.id); + pending.delete(msg.id); + resolve(msg); + } + } +}); + +function send(method, params = {}) { + const id = nextId++; + const msg = { jsonrpc: "2.0", id, method, params }; + child.stdin.write(JSON.stringify(msg) + "\n"); + return new Promise((resolve, reject) => { + pending.set(id, { resolve, reject }); + setTimeout(() => { + if (pending.has(id)) { + pending.delete(id); + reject(new Error(`timeout waiting for ${method}`)); + } + }, 15_000); + }); +} + +const failures = []; +function check(name, cond, detail = "") { + if (cond) { + console.log(` \u2713 ${name}`); + } else { + console.log(` \u2717 ${name}${detail ? " — " + detail : ""}`); + failures.push(name); + } +} + +async function main() { + // 1. Initialize + console.log("initialize"); + const init = await send("initialize", { + protocolVersion: "2024-11-05", + capabilities: {}, + clientInfo: { name: "smoke-test", version: "1.0" }, + }); + check("initialize returns result", init.result != null, JSON.stringify(init.error || {})); + check( + "server advertises tools capability", + init.result?.capabilities?.tools != null + ); + + // Some MCP server implementations expect a notifications/initialized after + // initialize. Send it but don't wait for a response. + child.stdin.write( + JSON.stringify({ jsonrpc: "2.0", method: "notifications/initialized" }) + "\n" + ); + + // 2. tools/list + console.log("\ntools/list"); + const list = await send("tools/list", {}); + const tools = list.result?.tools || []; + check("tools/list returns >= 40 tools", tools.length >= 40, `got ${tools.length}`); + + const byName = new Map(tools.map((t) => [t.name, t])); + + // 3. 
Verify new tools exist with expected schemas + console.log("\nnew tools present"); + const createStory = byName.get("create_story"); + check("create_story exists", createStory != null); + if (createStory) { + const props = createStory.inputSchema?.properties || {}; + check("create_story has epic_number property", props.epic_number != null); + check("create_story has epic_id property", props.epic_id != null); + check("create_story has project_id property", props.project_id != null); + check("create_story has story property", props.story != null); + check( + "create_story requires story", + (createStory.inputSchema?.required || []).includes("story") + ); + } + + const backfillStory = byName.get("backfill_story"); + check("backfill_story exists", backfillStory != null); + if (backfillStory) { + const props = backfillStory.inputSchema?.properties || {}; + check("backfill_story has story_id", props.story_id != null); + check("backfill_story has reason", props.reason != null); + check("backfill_story has evidence_url", props.evidence_url != null); + check("backfill_story has pr_number", props.pr_number != null); + const required = backfillStory.inputSchema?.required || []; + check("backfill_story requires story_id+reason", + required.includes("story_id") && required.includes("reason")); + // Refusal conditions should be in description + check( + "backfill_story description mentions dispatch lineage refusal", + (backfillStory.description || "").includes("dispatch") + ); + } + + const importStories = byName.get("import_stories"); + check("import_stories exists", importStories != null); + if (importStories) { + const props = importStories.inputSchema?.properties || {}; + check("import_stories has new merge property", props.merge != null); + check("import_stories has new payload_path property", props.payload_path != null); + check("import_stories merge is boolean", props.merge?.type === "boolean"); + // payload should no longer be required (either payload or payload_path 
works) + const required = importStories.inputSchema?.required || []; + check("import_stories does NOT require payload anymore", + !required.includes("payload")); + } + + // 4. Live roundtrip — list_projects + console.log("\ntools/call list_projects (real API)"); + const callResp = await send("tools/call", { + name: "list_projects", + arguments: {}, + }); + const content = callResp.result?.content?.[0]?.text; + check("list_projects returns content", content != null); + if (content) { + let parsed; + try { parsed = JSON.parse(content); } catch {} + check("list_projects content is JSON", parsed != null); + check("list_projects not an error", !callResp.result?.isError, + callResp.result?.isError ? content.slice(0, 200) : ""); + } + + // 5. Negative test — create_story without required `story` + console.log("\ntools/call create_story (validation)"); + const badCreate = await send("tools/call", { + name: "create_story", + arguments: { project_id: "fake", epic_number: 1 }, + }); + const badContent = badCreate.result?.content?.[0]?.text || ""; + check( + "create_story rejects missing story arg", + badCreate.result?.isError === true || + badContent.includes("story"), + badContent.slice(0, 200) + ); + + // 6. 
resolvePayload path guards — exercise via import_stories
  console.log("\ntools/call import_stories with bad payload_path");
  const badPath = await send("tools/call", {
    name: "import_stories",
    arguments: { project_id: "fake", payload_path: "/etc/passwd" },
  });
  const badPathText = badPath.result?.content?.[0]?.text || "";
  // /etc/passwd is absolute, regular, and well under the size cap, so it
  // passes resolvePayload; any failure here should come from the API
  // rejecting the fake project, not from the path guard.
  check(
    "/etc/passwd passes the path guard (only /proc, /dev, /sys are refused)",
    !badPathText.includes("pseudo-filesystem") &&
      !badPathText.includes("must be absolute"),
    badPathText.slice(0, 200)
  );

  const procPath = await send("tools/call", {
    name: "import_stories",
    arguments: { project_id: "fake", payload_path: "/proc/self/environ" },
  });
  const procText = procPath.result?.content?.[0]?.text || "";
  check(
    "import_stories rejects /proc/ payload_path",
    procText.includes("pseudo-filesystem") || procText.includes("refused"),
    procText.slice(0, 200)
  );

  const relPath = await send("tools/call", {
    name: "import_stories",
    arguments: { project_id: "fake", payload_path: "relative.json" },
  });
  const relText = relPath.result?.content?.[0]?.text || "";
  check(
    "import_stories rejects relative payload_path",
    relText.includes("must be absolute"),
    relText.slice(0, 200)
  );

  // Done
  child.kill();
  console.log("");
  if (failures.length) {
    console.log(`FAILED: ${failures.length} check(s)`);
    for (const f of failures) console.log(`  - ${f}`);
    process.exit(1);
  } else {
    console.log("All smoke checks passed.");
    process.exit(0);
  }
}

main().catch((err) => {
  console.error("smoke test error:", err);
  child.kill();
  process.exit(2);
});
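
For reference, the `payload_path` validation those last three checks exercise
can be sketched as follows. This is an illustrative model built from the
mcp-server changelog's description (absolute path, refusal of `/proc`, `/dev`,
`/sys` prefixes, regular-file check, 5 MiB cap), not the actual
`resolvePayload` in `index.js`:

```javascript
import fs from "node:fs";
import path from "node:path";

// Sketch of the payload_path guard described in the changelog; the real
// resolvePayload in index.js may order or word these checks differently.
const MAX_PAYLOAD_BYTES = 5 * 1024 * 1024; // 5 MiB cap
const REFUSED_PREFIXES = ["/proc/", "/dev/", "/sys/"];

function validatePayloadPath(p) {
  if (!path.isAbsolute(p)) {
    throw new Error("payload_path must be absolute");
  }
  if (REFUSED_PREFIXES.some((prefix) => p.startsWith(prefix))) {
    throw new Error("payload_path refused: pseudo-filesystem path");
  }
  const stat = fs.statSync(p); // throws if the file does not exist
  if (!stat.isFile()) {
    throw new Error("payload_path must be a regular file");
  }
  if (stat.size > MAX_PAYLOAD_BYTES) {
    throw new Error("payload_path exceeds 5 MiB size cap");
  }
  return p;
}
```

The prefix and absolute-path checks run before any filesystem access, which is
why the smoke test can probe them with a fake `project_id`: the guard fires
before the request ever reaches the API.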