@@ -148,10 +145,9 @@ This includes development, grants, security audits, research, operational initia
margin="0 0 -1rem 0"
/>
-> _See [LIP-73](https://github.com/livepeer/LIPs/blob/main/LIPs/LIP-0073.md) and [LIP-77](https://github.com/livepeer/LIPs/blob/main/LIPs/LIP-0077.md) for examples_
+> _See [LIP-89](https://github.com/livepeer/LIPs/blob/master/LIPs/LIP-89.md) and [LIP-92](https://github.com/livepeer/LIPs/blob/master/LIPs/LIP-92.md) for treasury mechanics._
Monitor on-chain staking, proposals, and treasury transactions live on the Livepeer Explorer
-{/* When the treasury balance reached a pre‑defined cap, contributions paused; future LIPs can adjust the rate or resume funding. */}
## Governance
The treasury uses the same [governance model & processes](governance-model) as the protocol (though implemented by a separate Governor contract):
diff --git a/v2/about/resources/faq.mdx b/v2/about/resources/faq.mdx
index 5b558eba3..a61de4c2b 100644
--- a/v2/about/resources/faq.mdx
+++ b/v2/about/resources/faq.mdx
@@ -55,7 +55,7 @@ Livepeer operates with multiple actors, both on-chain (protocol) and off-chain (
Livepeer is a protocol and open network for real-time video and AI compute, coordinated through on-chain incentives and off-chain execution.
- Start with [Livepeer Overview](/v2/about/concepts/livepeer-overview) for the full explanation.
+ Start with [Livepeer Overview](/v2/about/concepts/about-livepeer) for the full explanation.
@@ -63,13 +63,13 @@ Livepeer operates with multiple actors, both on-chain (protocol) and off-chain (
The network is the off-chain execution layer. It handles routing, compute, verification, and marketplace behaviour.
- Read [Protocol Overview](/v2/about/protocol/overview) and [Network Overview](/v2/about/network/overview) together for the clearest split.
+ Read [Protocol Overview](/v2/about/protocol/design) and [Network Overview](/v2/about/network/design) together for the clearest split.
LPT is used for staking, delegation, and governance. It is not the token used to pay for ordinary video or AI jobs.
- See [Livepeer Token](/v2/about/protocol/livepeer-token) and [Protocol Economics](/v2/about/protocol/economics).
+ See [Livepeer Token](/v2/about/protocol/livepeer-token) and [Protocol Economics](/v2/about/protocol/livepeer-token).
@@ -81,10 +81,10 @@ Livepeer operates with multiple actors, both on-chain (protocol) and off-chain (
Use the About navigator first, then follow the evaluation reading path.
- Start with [Navigator](/v2/about/navigator), then continue to [Evaluating Livepeer](/v2/about/resources/knowledge-hub/evaluating-livepeer).
+ Start with [Navigator](/v2/about/navigator), then continue to [Evaluating Livepeer](/v2/about/resources/knowledge-hub/evaluating-livepeer).
- Use [Glossary](/v2/about/resources/glossary) for About-specific terminology and [Livepeer Glossary](/v2/resources/glossary) for broader term coverage.
+ Use [Glossary](/v2/about/resources/faq) for About-specific terminology and [Livepeer Glossary](/v2/resources/glossary) for broader term coverage.
diff --git a/v2/about/resources/knowledge-hub/contributor-orientation.mdx b/v2/about/resources/knowledge-hub/contributor-orientation.mdx
index 63817ab74..3b208fa32 100644
--- a/v2/about/resources/knowledge-hub/contributor-orientation.mdx
+++ b/v2/about/resources/knowledge-hub/contributor-orientation.mdx
@@ -32,12 +32,12 @@ This page is being developed as the contributor-focused reading path through the
## Recommended reading order
-1. [Mental Model](/v2/about/concepts/mental-model)
-2. [Network Technical Architecture](/v2/about/network/technical-architecture)
-3. [Job Lifecycle](/v2/about/network/job-lifecycle)
-4. [Protocol Overview](/v2/about/protocol/overview)
+1. [Mental Model](/v2/about/concepts/livepeer-stack)
+2. [Network Technical Architecture](/v2/about/network/architecture)
+3. [Job Lifecycle](/v2/about/network/job-pipelines)
+4. [Protocol Overview](/v2/about/protocol/design)
5. [Blockchain Contracts](/v2/about/protocol/blockchain-contracts)
-6. [Protocol Design Philosophy](/v2/about/protocol/design-philosophy)
+6. [Protocol Design Philosophy](/v2/about/protocol/design)
## What this guide will eventually add
diff --git a/v2/about/resources/knowledge-hub/evaluating-livepeer.mdx b/v2/about/resources/knowledge-hub/evaluating-livepeer.mdx
index ef1603d79..11f567f42 100644
--- a/v2/about/resources/knowledge-hub/evaluating-livepeer.mdx
+++ b/v2/about/resources/knowledge-hub/evaluating-livepeer.mdx
@@ -34,12 +34,12 @@ This page is being developed as the evaluation reading path for the About tab. I
## Recommended reading order
-1. [Livepeer Overview](/v2/about/concepts/livepeer-overview)
-2. [Mental Model](/v2/about/concepts/mental-model)
-3. [Network Overview](/v2/about/network/overview)
-4. [Marketplace](/v2/about/network/marketplace)
-5. [Protocol Overview](/v2/about/protocol/overview)
-6. [Protocol Economics](/v2/about/protocol/economics)
+1. [Livepeer Overview](/v2/about/concepts/about-livepeer)
+2. [Mental Model](/v2/about/concepts/livepeer-stack)
+3. [Network Overview](/v2/about/network/design)
+4. [Marketplace](/v2/about/network/marketplace-model)
+5. [Protocol Overview](/v2/about/protocol/design)
+6. [Protocol Economics](/v2/about/protocol/livepeer-token)
7. [Network Metrics](/v2/about/resources/reference/network-metrics)
## What this guide will eventually add
diff --git a/v2/about/resources/knowledge-hub/gateways-vs-orchestrators.mdx b/v2/about/resources/knowledge-hub/gateways-vs-orchestrators.mdx
index 1d373182a..b23330f4f 100644
--- a/v2/about/resources/knowledge-hub/gateways-vs-orchestrators.mdx
+++ b/v2/about/resources/knowledge-hub/gateways-vs-orchestrators.mdx
@@ -23,6 +23,7 @@ keywords:
'og:image:type': image/png
'og:image:width': 1200
'og:image:height': 630
+status: current
---
---
diff --git a/v2/about/resources/knowledge-hub/livepeer-whitepaper.mdx b/v2/about/resources/knowledge-hub/livepeer-whitepaper.mdx
index fc8479e79..f6938d21b 100644
--- a/v2/about/resources/knowledge-hub/livepeer-whitepaper.mdx
+++ b/v2/about/resources/knowledge-hub/livepeer-whitepaper.mdx
@@ -20,6 +20,7 @@ keywords:
'og:image:width': 1200
'og:image:height': 630
audience: general
+status: current
lastVerified: 2026-03-17T00:00:00.000Z
---
Livepeer’s original whitepaper (published 2017) outlined an ambitious design for a fully decentralised live video streaming network.
diff --git a/v2/about/resources/reference/technical-roadmap.mdx b/v2/about/resources/reference/technical-roadmap.mdx
index 007faefd2..bd8b306a5 100644
--- a/v2/about/resources/reference/technical-roadmap.mdx
+++ b/v2/about/resources/reference/technical-roadmap.mdx
@@ -19,6 +19,7 @@ keywords:
'og:image:width': 1200
'og:image:height': 630
audience: general
+status: current
lastVerified: 2026-03-17T00:00:00.000Z
---
Use these roadmap posts to review the current Livepeer network vision and product direction:
diff --git a/v2/developers/build/byoc.mdx b/v2/developers/build/byoc.mdx
index 7243bf843..884d68bda 100644
--- a/v2/developers/build/byoc.mdx
+++ b/v2/developers/build/byoc.mdx
@@ -28,11 +28,9 @@ audience: developer
lastVerified: 2026-03-17T00:00:00.000Z
status: draft
---
-[//]: # (SCOPE: how_to page. Goal → Prerequisites → Steps → Variants → Related. Assumes reader has a custom AI model or processing function they want to run on the Livepeer network. Does not assume ComfyStream knowledge.)
-
Bring Your Own Container (BYOC) lets you run any custom AI model on the Livepeer network inside your own Docker container. Your container receives a live video (or audio) stream, processes it with your model, and returns the processed output – all over the Livepeer network's trickle streaming protocol.
-BYOC was hardened to production-grade in Phase 4 (January 2026). The Embody SPE and Streamplace are currently running production BYOC workloads.
+BYOC is an advanced operator path. Confirm the current gateway, orchestrator, and PyTrickle support matrix before treating a custom container workflow as production-ready.
If you are building with ComfyUI workflows specifically, see [Build with ComfyStream](/v2/developers/build/comfystream) – ComfyStream is already BYOC-compatible and may be all you need.
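The receive, process, return contract can be sketched as a plain Python callback. Everything below is illustrative: the handler name and signature are hypothetical, and the real registration API should be taken from the current PyTrickle docs.

```python
def process_frame(frame: bytes) -> bytes:
    """Hypothetical per-frame handler for a BYOC worker.

    A real handler would run model inference here; this stand-in
    inverts each byte so the transform is observable.
    """
    return bytes(255 - b for b in frame)
```

A BYOC server would invoke a handler like this once per decoded frame and stream the result back over the trickle protocol.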
@@ -141,8 +139,6 @@ async def main():
await server.run_forever()
```
-[//]: # (REVIEW: Confirm the correct `capability_name` value for the standard Livepeer live-video-to-video pipeline. "live-video-to-video" is the pipeline type from go-livepeer; verify this matches what PyTrickle registers. (Rick / PyTrickle maintainers\))
-
## Step 2 – Define the REST API Contract
@@ -193,7 +189,7 @@ EXPOSE 8000
CMD ["python3", "processor.py"]
```
-[//]: # (REVIEW: Confirm whether there is an official Livepeer base image (e.g., livepeer/ai-runner:base or a PyTrickle image\) that developers should use instead of building from nvidia/cuda. Check ai-runner repo for an official base image or Dockerfile. Rick to confirm.)
+Use the base image recommended by the current PyTrickle or Livepeer AI runtime docs. If no project-specific base image is published for your target path, start from an NVIDIA CUDA runtime image and pin package versions.
Build and test locally:
@@ -282,18 +278,11 @@ The image must be accessible to your orchestrator. Public Docker Hub or any regi
Your BYOC container runs on an orchestrator. The orchestrator pulls your image, starts it, and routes live-video-to-video jobs to it.
-[//]: # (REVIEW: CRITICAL – confirm the exact go-livepeer orchestrator flags for registering a BYOC container. The following is a placeholder structure based on the Phase 4 architecture; exact flags must be verified from go-livepeer CLI reference or Rick.)
-
-To register your container with an orchestrator, you (or the orchestrator you are working with) configure go-livepeer to use BYOC mode and point to your container image:
+To register your container with an orchestrator, you (or the orchestrator you are working with) configure go-livepeer to use the current BYOC registration flow and point to your container image:
```bash icon="terminal"
-# Placeholder — exact flags not confirmed
-# REVIEW: Verify the correct go-livepeer flags for BYOC container registration
-livepeer \
- -orchestrator \
- -byoc \
- -byocImage /:latest \
- # ... Other orchestrator flags
+# Check the current go-livepeer CLI reference for the active BYOC flags.
+livepeer
```
For current orchestrators accepting BYOC workloads, see the [MuxionLabs BYOC example apps](https://github.com/muxionlabs/byoc-example-apps) – these include working deployment configurations that other orchestrators have used.
@@ -322,19 +311,16 @@ await client.startStream({ videoElement, params: { /* your params */ } });
The SDK handles WebRTC streaming from the browser directly to your gateway without requiring a custom backend.
-[//]: # (REVIEW: Confirm SDK install name and basic API from npmjs.com/@muxionlabs/byoc-sdk.)
-
## Variants
### ComfyStream as a BYOC container
-ComfyStream is already integrated with PyTrickle (Phase 4). To run ComfyStream as a BYOC worker, use the `muxionlabs/comfystream` image instead of building from scratch:
+ComfyStream integrates with PyTrickle. To run ComfyStream as a BYOC worker, use the current image and tag recommended by the ComfyStream docs instead of assuming the example below is the active release:
```bash icon="terminal"
-# REVIEW: Confirm the exact muxionlabs/comfystream image name and tag
-docker pull muxionlabs/comfystream:latest
+docker pull <image>:<tag>
```
See [Build with ComfyStream](/v2/developers/build/comfystream) for ComfyStream-specific configuration.
diff --git a/v2/developers/build/comfystream.mdx b/v2/developers/build/comfystream.mdx
index 94f43e353..3c1f8b758 100644
--- a/v2/developers/build/comfystream.mdx
+++ b/v2/developers/build/comfystream.mdx
@@ -28,7 +28,6 @@ audience: developer
lastVerified: 2026-03-17T00:00:00.000Z
status: draft
---
-[//]: # (SCOPE: This is a guide page for developers who have completed the ComfyStream quickstart and are deepening their usage. It does not repeat installation or first-run steps.)
This guide covers all available ComfyStream pipeline modes, the node ecosystem, how to build and load custom workflows, and how to configure output types including video, audio, and data-channel.
@@ -45,11 +44,10 @@ ComfyStream supports four output modalities. Every ComfyStream workflow produces
| **Image-to-image (live)** | Live video frames (webcam or stream) | Transformed video frames | StreamDiffusion sampler | Primary mode for style transfer and generative overlays |
| **Video-to-video** | Video segment | Processed video | StreamDiffusion V2 | Temporal consistency across frames; suited to V2V tasks |
| **Audio processing** | Audio track from stream | Audio (pass-through or transformed) | LoadAudioTensor | Processes audio alongside video in the same workflow |
-| **Data-channel output** | Audio (for transcription) or video frames | Structured text data alongside video | AudioTranscription + data output node | Phase 4 addition; Whisper-based; output via WebRTC data channel |
+| **Data-channel output** | Audio (for transcription) or video frames | Structured text data alongside video | Data output node | Output via WebRTC data channel |
-ComfyStream can serve multiple pipelines in a single container (Phase 4 BYOC addition). Dynamic warm-up allows new pipelines to load mid-stream without restarting the server.
-[//]: # (REVIEW: Confirm "multiple pipelines in single container" framing from docs.comfystream.org or Phase 4 BYOC implementation details. Phase 4 retrospective says "hosting multiple models and disparate workflow/pipelines on one orchestrator in a single container.")
+ComfyStream deployments can support multiple workflows, but the exact container and warm-up behavior depends on the current ComfyStream release and deployment mode.
@@ -71,8 +69,6 @@ These nodes handle real-time tensor input and output. They are required for Comf
These nodes update their output on every workflow execution – designed specifically for real-time video loops.
-[//]: # (REVIEW: Verify the canonical repo for these nodes. Ryanontheinside/ComfyUI_RealtimeNodes appears to be the primary source. Confirm whether these are officially endorsed for the Livepeer ComfyStream ecosystem or community-maintained.)
-
| Node | Source | Purpose |
|------|--------|---------|
| `FloatControl` | ComfyUI_RealtimeNodes | Outputs a float that changes over time (sine, bounce, random) – use to animate parameters |
@@ -82,12 +78,10 @@ These nodes update their output on every workflow execution – designed specifi
| `IntSequence` | ComfyUI_RealtimeNodes | Cycles through comma-separated integer values |
| Motion detection nodes | ComfyUI_RealtimeNodes | Detects motion between frames; can trigger parameter changes |
-### StreamDiffusion nodes (Phase 4)
+### StreamDiffusion nodes
The primary generative video nodes, ported from Livepeer Inc's Daydream StreamDiffusion pipeline.
-[//]: # (REVIEW: Confirm canonical repo location for these nodes. Phase 4 retrospective says they were "added to the ComfyUI Stream Pack" but the livepeer/ComfyUI-Stream-Pack README shows no nodes added. Pschroedl/ComfyUI-StreamDiffusion is the actual repo found. Rick should confirm the official location.)
-
| Node | Purpose | Notes |
|------|---------|-------|
| `StreamDiffusionCheckpoint` | Loads a StreamDiffusion checkpoint model | Use with SD1.5 or SDXL models |
@@ -98,21 +92,17 @@ The primary generative video nodes, ported from Livepeer Inc's Daydream StreamDi
**StreamDiffusion V2** adds support for video-to-video mode and stable diffusion V2 base models.
-### SuperResolution node (Phase 4)
+### SuperResolution node
Real-time video upscaling. Input: standard-resolution frame; output: upscaled frame. Suitable for adding resolution to low-quality input streams.
-[//]: # (REVIEW: Confirm node name and source repo from Rick / muxionlabs/comfystream.)
-
-### AudioTranscription nodes (Phase 4)
+### AudioTranscription nodes
Whisper-based real-time speech transcription. Two output modes:
- **Video output with SRT subtitles** – captions are burned into the video segments
- **Data-channel text output** – transcript text delivered to the application separately via WebRTC data channel; no visual overlay
-[//]: # (REVIEW: Confirm node names from muxionlabs/comfystream. Phase 4 confirms these were shipped as "AudioTranscription + SRT" node set.)
-
## Custom Workflows
@@ -142,8 +132,7 @@ Workflows saved from ComfyUI in the default format (with UI layout data) will no
- Copy the workflow JSON into the `workflows/` directory inside your ComfyStream workspace. For Docker deployments, mount this directory as a volume.
- [//]: # (REVIEW: Confirm exact path convention from docs.comfystream.org. The workflows/ dir is confirmed from the ComfyStream repo but the precise expected path may differ per deployment mode.)
+ Copy the workflow JSON into the workflow directory expected by your ComfyStream deployment. For Docker deployments, mount that directory as a volume.
@@ -163,14 +152,13 @@ cd ComfyUI/custom_nodes/
pip install -r requirements.txt
```
-For ComfyStream Docker deployments, Phase 4 added a config-based method to specify which custom node subsets are included in the container build:
-[//]: # (REVIEW: Confirm the exact config mechanism from docs.comfystream.org. Phase 4 retrospective says "a simple, config based method to allow for developing and deploying workflows using custom nodes which have different underlying python package requirements.")
+For ComfyStream Docker deployments, use the current ComfyStream configuration mechanism to specify which custom node subsets and Python dependencies are included in the container build.
## Data-Channel Output
-The data-channel output type (Phase 4) allows ComfyStream to produce structured text data alongside video – without requiring it to be embedded in the video frames.
+The data-channel output type allows ComfyStream to produce structured text data alongside video without requiring it to be embedded in the video frames.
**Use cases:**
- Real-time audio transcription delivered as text to a downstream application
@@ -181,9 +169,7 @@ The data-channel output type (Phase 4) allows ComfyStream to produce structured
ComfyStream extends the WebRTC connection with a data channel. When the workflow contains a data output node, the text output is sent over the data channel to the browser or application that has connected to the ComfyStream server.
-To receive data-channel output from the client side, use `@muxionlabs/byoc-sdk`, which provides data-channel support alongside WebRTC video streaming.
-
-[//]: # (REVIEW: Confirm the exact data-output node name and wiring pattern from muxionlabs/comfystream or Rick. The Phase 4 retrospective confirms the capability but does not name the exact node or API.)
+To receive data-channel output from the client side, use the current ComfyStream or BYOC client library recommended by the project docs.
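Client-side, data-channel messages typically arrive as serialized text. The JSON schema below (a `type` field plus a `text` field) is an assumption for illustration, not the documented ComfyStream message format; check the current client library docs for the real shape.

```python
import json

def parse_data_channel_message(raw: str) -> str:
    """Extract transcript text from a data-channel payload.

    The {"type": ..., "text": ...} schema is hypothetical; confirm
    the actual message format in the ComfyStream/BYOC client docs.
    """
    msg = json.loads(raw)
    if msg.get("type") == "transcript":
        return msg.get("text", "")
    return ""
```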
@@ -199,28 +185,25 @@ ComfyStream compiles TensorRT engines and runs `torch.compile` on model componen
### Frame rate and throughput
-Achievable frame rate depends on model complexity, GPU, and image resolution. Reference figures (from community testing, RTX 4090):
-
-- SD1.5 + DMD one-step + DepthControlNet workflow: ~14–15 fps at 640×360 input
-- StreamDiffusion with TensorRT: higher throughput at same resolution (exact figures vary by LoRA and ControlNet load)
+Achievable frame rate depends on model complexity, GPU, image resolution, TensorRT availability, and workflow design. Treat any benchmark as deployment-specific.
-[//]: # (REVIEW: Verify these reference figures from docs.comfystream.org benchmark section or an official Livepeer performance report. Current figures are from community gists – confirm before publication.)
+- Start with lower resolutions while tuning a workflow.
+- Keep model weights warm on the target GPU before measuring interactive latency.
+- Benchmark the exact workflow, LoRAs, ControlNets, and output mode you plan to run.
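The tuning advice above can be applied with a minimal timing harness. This sketch is framework-agnostic: `process` stands in for whatever per-frame callable your workflow exposes, and the warm-up pass mirrors the keep-weights-warm guidance.

```python
import time

def measure_fps(process, frames, warmup=3):
    """Time a per-frame callable after a short warm-up pass."""
    for f in frames[:warmup]:
        process(f)  # run a few frames first so caches/weights are warm
    start = time.perf_counter()
    for f in frames:
        process(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```

Run it against the exact workflow and resolution you plan to deploy; a no-op callable only measures harness overhead.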
-### Dynamic warm-up (Phase 4)
+### Dynamic warm-up
-ComfyStream now supports dynamic warm-up, allowing new workflows to load mid-stream without restarting the server. This enables:
+ComfyStream deployments can support dynamic warm-up, allowing workflows to load without a full server rebuild when configured for that mode. This enables:
- Multi-model hosting on a single orchestrator container
- Hot-swap between workflows on demand
### Configuration parameters
-[//]: # (REVIEW: Extract the complete set of configurable server parameters from docs.comfystream.org or the comfystream server/app.py source. The following are confirmed from README and Phase 4 but without official defaults.)
-
| Parameter | How to set | Effect | Default |
|-----------|-----------|--------|---------|
| `--workspace` | CLI flag to `server/app.py` | Path to ComfyUI workspace directory | Required |
| `--media-ports` | CLI flag | Comma-delimited UDP port range for WebRTC | 1024–65535 |
-| Port | `docker run -p` or `--port` | Server port | [//]: # (REVIEW: Confirm default port) |
+| Port | `docker run -p` or current server flag | Server port | Check current release |
diff --git a/v2/developers/build/model-support.mdx b/v2/developers/build/model-support.mdx
index 608edda27..be51e68af 100644
--- a/v2/developers/build/model-support.mdx
+++ b/v2/developers/build/model-support.mdx
@@ -46,10 +46,10 @@ These pipelines accept a request, process it, and return a result. They use the
|----------|-------------|------------------------|------------|----------|--------|
| **Text to image** | `POST /text-to-image` | Stable Diffusion XL (SDXL), SD 1.5, Flux | `SG161222/RealVisXL_V4.0_Lightning` | 24 GB | Beta |
| **Image to image** | `POST /image-to-image` | Instruct-Pix2Pix, SDXL img2img, SD 1.5 | `timbrooks/instruct-pix2pix` | 20 GB | Beta |
-| **Image to video** | `POST /image-to-video` | Stable Video Diffusion (SVD, SVD-XT) | `stabilityai/stable-video-diffusion-img2vid-xt` | [//]: # (REVIEW: VRAM not confirmed from Livepeer published source. SVD-XT runs on A100 80GB per HF model card; Livepeer-published minimum not retrieved. Confirm from docs.livepeer.org/ai/pipelines/image-to-video) | Beta |
+| **Image to video** | `POST /image-to-video` | Stable Video Diffusion (SVD, SVD-XT) | `stabilityai/stable-video-diffusion-img2vid-xt` | Provider-dependent | Beta |
| **Image to text** | `POST /image-to-text` | BLIP, BLIP-2, vision-language models | `Salesforce/blip-image-captioning-large` | 4 GB | Beta |
| **Audio to text** | `POST /audio-to-text` | Whisper (OpenAI) | `openai/whisper-large-v3` | 12 GB | Beta |
-| **Text to speech** | `POST /text-to-speech` | [//]: # (REVIEW: Warm model not confirmed – likely Bark or XTTS based on Diffusers TTS pipeline conventions. Verify from docs.livepeer.org/ai/pipelines/text-to-speech) | [//]: # (REVIEW) | 12 GB | Beta |
+| **Text to speech** | `POST /text-to-speech` | Pipeline-specific TTS model | Provider-configured | 12 GB | Beta |
| **Upscale** | `POST /upscale` | SD x4-Upscaler (4× super-resolution) | `stabilityai/stable-diffusion-x4-upscaler` | 24 GB | Beta |
| **Segment Anything 2** | `POST /segment-anything-2` | SAM 2 (Meta AI) | `facebook/sam2-hiera-large` | 6 GB | Beta |
| **LLM** | `POST /llm` | Any Ollama-compatible model (Llama, Mistral, Gemma, Qwen, …) | `meta-llama/Meta-Llama-3.1-8B-Instruct` | 8 GB | Beta |
@@ -66,7 +66,7 @@ These pipelines accept a request, process it, and return a result. They use the
**Image to text** – Returns a text caption. Accepts an optional prompt to guide the caption content.
-**Audio to text** – Returns a full transcript with per-chunk timestamps. Accepts audio files up to [//]: # (REVIEW: confirm max file size from api-runner or gateway docs). Uses Whisper-large-v3 as the default warm model.
+**Audio to text** – Returns a full transcript with per-chunk timestamps. File-size limits are gateway-dependent. Uses Whisper-large-v3 as the default warm model.
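Per-chunk timestamps make it straightforward to derive subtitles client-side. The sketch below assumes a Whisper-style chunk shape (`{"text": ..., "timestamp": [start, end]}` in seconds); verify the field names against the live gateway response before relying on it.

```python
def chunks_to_srt(chunks):
    """Render transcript chunks as SRT cues (assumed chunk schema)."""
    def fmt(t):
        # SRT timestamps are HH:MM:SS,mmm
        h, rem = divmod(int(t), 3600)
        m, s = divmod(rem, 60)
        ms = int(round((t - int(t)) * 1000))
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

    cues = []
    for i, c in enumerate(chunks, 1):
        start, end = c["timestamp"]
        cues.append(f"{i}\n{fmt(start)} --> {fmt(end)}\n{c['text'].strip()}\n")
    return "\n".join(cues)
```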
**Text to speech** – Requires a pipeline-specific AI Runner container. Standard ai-runner image does not include this pipeline; orchestrators must opt in.
@@ -82,7 +82,7 @@ These pipelines process live video streams frame-by-frame. They use the trickle
| Pipeline | Transport | Supported models | Min VRAM | Status |
|----------|-----------|-----------------|----------|--------|
-| **live-video-to-video** (Cascade) | Trickle / WebRTC | Any ComfyUI-compatible model; StreamDiffusion, SDXL, ControlNets, LoRAs, SuperResolution, Whisper (audio), Gemma (video understanding) | [//]: # (REVIEW: Variable by workflow. StreamDiffusion + SD1.5 one-step: community-reported 8–12GB. SDXL + TensorRT: 16–24GB. Confirm minimum supported config from docs.comfystream.org or Rick.) | Beta |
+| **live-video-to-video** (Cascade) | Trickle / WebRTC | Any ComfyUI-compatible model; StreamDiffusion, SDXL, ControlNets, LoRAs, SuperResolution, Whisper (audio), Gemma (video understanding) | Workflow-dependent | Beta |
The live-video-to-video pipeline is served by [ComfyStream](/v2/developers/build/comfystream). The pipeline type in go-livepeer is `live-video-to-video`. It is not accessible via the standard AI Jobs API – it requires a real-time connection to a gateway that has this pipeline enabled.
@@ -116,9 +116,7 @@ See [Bring Your Own Container](/v2/developers/build/byoc) for the full implement
- Request a specific model via `model_id` and coordinate with your orchestrator to keep it warm
- For production workloads requiring consistent latency, run your own gateway and orchestrator with your target model pre-loaded
-**Warm model signalling:** Orchestrators advertise their warm models to the gateway. When you request a specific `model_id`, the gateway routes your job to an orchestrator that has that model warm. If none does, the request is held until a cold-start load completes or times out.
-
-[//]: # (REVIEW: Confirm the timeout behaviour – does the gateway return an error immediately if no warm orchestrator exists, or does it wait? Verify from go-livepeer gateway source or Rick.)
+**Warm model signalling:** Orchestrators advertise their warm models to the gateway. When you request a specific `model_id`, the gateway can route your job to an orchestrator that has that model warm. Timeout behavior is gateway-dependent.
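The routing idea can be illustrated with a toy selector. The real selection and timeout logic lives in go-livepeer, so the dictionary shape here (`url`, `warm_models`) is purely illustrative.

```python
def pick_orchestrator(orchestrators, model_id):
    """Return the first orchestrator advertising model_id as warm, else None.

    Toy illustration of gateway routing only; the actual gateway's
    selection and cold-start/timeout behavior is implementation-defined.
    """
    for orch in orchestrators:
        if model_id in orch.get("warm_models", []):
            return orch["url"]
    return None  # gateway-dependent: wait for a cold start, or error out
```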
diff --git a/v2/developers/build/workload-fit.mdx b/v2/developers/build/workload-fit.mdx
index aa79e533c..2b4c5f0af 100644
--- a/v2/developers/build/workload-fit.mdx
+++ b/v2/developers/build/workload-fit.mdx
@@ -9,6 +9,7 @@ complexity: intermediate
audience: developer
purpose: choose
pageType: concept
+status: current
lastVerified: "2026-03-17"
keywords:
- livepeer
@@ -225,13 +226,13 @@ Many batch and file-based AI workloads are technically runnable on Livepeer. How
## Next steps
-
+
ComfyStream and BYOC - how to build and deploy inference pipelines.
-
+
Bring your own container: run custom models on the network.
-
+
Full compatibility matrix - which model families run on Livepeer.
diff --git a/v2/developers/concepts/ai-on-livepeer.mdx b/v2/developers/concepts/ai-on-livepeer.mdx
index 2136d4dbd..a1ba91639 100644
--- a/v2/developers/concepts/ai-on-livepeer.mdx
+++ b/v2/developers/concepts/ai-on-livepeer.mdx
@@ -150,12 +150,10 @@ Real-time AI on Livepeer is built around the `live-video-to-video` pipeline type
The infrastructure model differs from batch processing in four ways:
- **Connection:** Persistent WebRTC or RTMP stream, not request/response
-- **Billing:** Per second of compute, not per pixel or per output
+- **Billing:** Provider-dependent; real-time workloads are typically priced by stream duration or reserved compute rather than per output
- **GPU assignment:** Dedicated to your stream for its full duration
- **Output:** Continuous frame-by-frame results, not a single returned asset
-{/* REVIEW: Confirm exact billing unit with Rick/Mehrdad -- per-second confirmed in go-livepeer CHANGELOG but exact developer-facing pricing model needs verification. */}
-
**ComfyStream** is the primary tool for building real-time AI pipelines on Livepeer. It is an open-source ComfyUI plugin (`github.com/livepeer/comfystream`) that turns ComfyUI's node-graph workflow editor into a real-time inference engine for live video. Daydream is built on ComfyStream -- if you are using the Daydream API, you are already running on this infrastructure.
### Use cases
@@ -196,9 +194,7 @@ The LLM pipeline brings text inference to the Livepeer network using an Ollama-b
The LLM pipeline runs on a wider range of GPU hardware than diffusion-based batch pipelines. An orchestrator needs as little as 8 GB of VRAM to serve LLM workloads, making it accessible to a larger pool of network participants.
-{/* REVIEW: Confirm LLM pipeline production status vs beta with Rick or Mehrdad before removing this note. Referenced in go-livepeer CHANGELOG as updated to OpenAI-compatible format. */}
-
-The **LLM SPE** built and maintains this pipeline. The **Cloud SPE** provides managed gateway access to it, making decentralised LLM inference available at `https://livepeer.studio/api/beta/generate/llm` with a Studio API key and no infrastructure setup.
+The **LLM SPE** built and maintains this pipeline. The **Cloud SPE** provides managed gateway access to it through the beta generate endpoint at `https://livepeer.studio/api/beta/generate/llm` with a Studio API key and no infrastructure setup.
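A request to that endpoint can be assembled as below. The OpenAI-style chat body is an assumption based on the pipeline's OpenAI-compatible format; confirm field names against the current gateway reference before use. The default `model` value reuses the warm model listed in the pipeline table.

```python
import json

def build_llm_request(api_key, prompt,
                      model="meta-llama/Meta-Llama-3.1-8B-Instruct"):
    """Assemble an OpenAI-style chat request for the beta LLM endpoint.

    Field names assume the OpenAI-compatible format; verify against
    the current gateway API reference.
    """
    url = "https://livepeer.studio/api/beta/generate/llm"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body
```

Send the result with any HTTP client; the tuple split keeps request construction testable without touching the network.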
### Working with the LLM pipeline
diff --git a/v2/developers/concepts/builders.mdx b/v2/developers/concepts/builders.mdx
index 47b98dd41..a39bfbf96 100644
--- a/v2/developers/concepts/builders.mdx
+++ b/v2/developers/concepts/builders.mdx
@@ -40,7 +40,7 @@ import { CenteredContainer } from '/snippets/components/wrappers/containers/Cont
The Livepeer ecosystem spans official protocol infrastructure, developer tooling, AI pipeline runtimes, creative applications, operator utilities, and analytics dashboards. This page collects everything in one place so you can find what you need without searching across GitHub, the Forum, and Discord separately.
-For the official SDK and API reference, see . For contribution paths and funded work, see .
+For the official SDK and API reference, see . For contribution paths and funded work, see .
@@ -79,14 +79,14 @@ cd storyboard && npm install && npm run dev
The inference runtime that runs inside Livepeer orchestrator nodes. Handles both batch and real-time AI pipelines. If you are building a BYOC (Bring Your Own Container) deployment or a custom AI pipeline, `ai-runner` is the execution environment your container integrates with. 24 stars, 32 forks, the most actively developed AI-specific repository in the org.
-See for BYOC integration guides.
+See for BYOC integration guides.
---
### ComfyStream [Official]
**Repo**: [github.com/livepeer/comfystream](https://github.com/livepeer/comfystream)
-**Docs**: [docs.comfystream.org](https://docs.comfystream.org) {/* REVIEW: confirm docs URL */}
+**Docs**: [docs.comfystream.org](https://docs.comfystream.org)
A ComfyUI custom node that runs real-time media workflows as a live streaming backend. ComfyUI pipelines become real-time video-to-video processors served through the Livepeer AI subnet. Used as the reference implementation for real-time AI pipeline development on the network.
@@ -128,7 +128,7 @@ The official Livepeer network portal. A micro-frontend shell that loads independ
Build and publish your own plugin using the `@naap/plugin-sdk` CLI. Full quickstart, architecture docs, API reference, and 8 AI prompt templates for plugin development are available in the NaaP docs.
-See for the full overview.
+See for the full overview.
---
@@ -142,7 +142,7 @@ Hosted identity, billing, and payment signing infrastructure for Livepeer-powere
Built by [@eliteprox](https://github.com/eliteprox) (John), a Livepeer orchestrator operator and go-livepeer contributor. Integrates with NaaP's Developer API Manager as a billing provider via OAuth.
-See for the full overview.
+See for the full overview.
---
@@ -338,10 +338,10 @@ Python scripts for delegators and orchestrators to calculate earnings, rewards,
## Related pages
-
+
Build and publish plugins for the official Livepeer network portal.
-
+
Identity, billing, and payment signing for Livepeer app developers.
diff --git a/v2/developers/concepts/developer-stack.mdx b/v2/developers/concepts/developer-stack.mdx
index 72efb809f..705f198cb 100644
--- a/v2/developers/concepts/developer-stack.mdx
+++ b/v2/developers/concepts/developer-stack.mdx
@@ -68,7 +68,7 @@ Studio and Daydream are separate products built **on top of** the Livepeer Netwo
-Every component of the Livepeer Network is built and maintained in the open. This page covers the repositories most relevant to developers building on, contributing to, or integrating with the Livepeer stack. Repos are grouped by the layer of the stack they belong to, following the same model described in .
+Every component of the Livepeer Network is built and maintained in the open. This page covers the repositories most relevant to developers building on, contributing to, or integrating with the Livepeer stack. Repos are grouped by the layer of the stack they belong to, following the same model described in .
Repositories marked **[Official]** are maintained under the `livepeer/` GitHub organisation. Repositories marked **[Community]** are maintained by ecosystem contributors outside the org.
@@ -168,7 +168,7 @@ Direct inference access, operator portals, and identity and billing infrastructu
AI Gateway API [Official]
REST API for direct AI inference access on the Livepeer Network. Two hosted providers: the Livepeer Studio Gateway (production) and the Livepeer Cloud Community Gateway (free, for experimentation). No Studio account required for the community Gateway.
-
- [AI quickstart](/v2/developers/get-started)
+ [AI quickstart](/v2/developers/get-started/ai-quickstart)
[Livepeer/naap](https://github.com/livepeer/naap) [Official]
@@ -435,10 +435,10 @@ The stack is a dependency chain: every layer above depends on the layers below i
Contribution paths for each repo -- where to start for your first PR.
-
+
Build and publish a plugin for the official Livepeer Network portal.
-
+
Active projects, applications, and tools across the ecosystem.
@@ -450,7 +450,7 @@ The stack is a dependency chain: every layer above depends on the layers below i
| Studio API | Hosted video infrastructure (livestream, transcode, VOD, player) | Video streaming apps, media platforms | [Studio docs](https://docs.livepeer.studio) |
| Daydream API | Hosted real-time AI video API, built on Livepeer's AI network | Real-time AI effects, interactive video, world model apps | [Daydream docs](https://pipelines.livepeer.org) |
| AI Gateway API | Direct REST access to the Livepeer AI inference network | Custom AI apps, self-hosted cost control, non-Daydream workflows | [AI quickstart](/v2/developers/get-started/ai-quickstart) |
-| ComfyStream | Open-source ComfyUI plugin for real-time AI video pipelines | Custom AI workflows, VTubing, generative effects, BYOC deployments | [ComfyStream quickstart](/v2/developers/get-started/ComfyStream-quickstart) |
+| ComfyStream | Open-source ComfyUI plugin for real-time AI video pipelines | Custom AI workflows, VTubing, generative effects, BYOC deployments | [ComfyStream quickstart](/v2/developers/get-started/comfystream-quickstart) |
| Protocol layer | go-livepeer, Solidity contracts, ai-worker | Contributing to the network, running nodes, custom pipeline types | [OSS stack](/v2/developers/concepts/oss-stack) |
@@ -461,13 +461,11 @@ Studio is a full-featured, hosted video platform for developers operated by Live
Studio is the right choice if your goal is to add video streaming or transcoding to an application and you want a managed service with standard API-key auth, dashboard access, and predictable pricing.
-What you give up: you have no control over which Orchestrators process your video, cannot deploy custom AI inference workflows through Studio, and are bound to Studio's feature set instead of the full protocol surface. Costs are lower than traditional cloud providers (up to 80–90% savings) but higher than running your own Gateway at scale.
+What you give up: you have no control over which Orchestrators process your video, cannot deploy custom AI inference workflows through Studio, and are bound to Studio's feature set instead of the full protocol surface. Check the current Studio pricing page for the active hosted-service price model.
**SDKs:** TypeScript/JavaScript (`@livepeer/sdk`), Go, Python, and others – [Studio SDK docs](https://docs.livepeer.studio)
**Auth:** API key from the [Studio dashboard](https://livepeer.studio)
-[//]: # (REVIEW: Confirm whether Studio exposes any AI endpoints (e.g. Content moderation, clip generation\) – if so, the above description needs qualification)
-
## Daydream
@@ -481,13 +479,13 @@ Daydream is the right choice if you want to build real-time AI video application
**Docs:** [pipelines.livepeer.org](https://pipelines.livepeer.org)
**Community:** [Discord – Daydream community](https://discord.gg/livepeer)
-[//]: # (REVIEW: Confirm whether the "Daydream API" requires a separate account/key from the Studio Gateway, or whether it shares Studio auth. Clarify with Peter or Joseph.)
+Check the Daydream docs for the current account and API-key requirements before building against the hosted API.
## AI Gateway API
-The Livepeer AI Gateway API is the underlying REST API that powers AI inference on the Livepeer Network. Any application can call it directly – you do not need to go through Daydream or Studio to use it. Multiple hosted Gateway providers are available in the ecosystem, including the Livepeer Studio Gateway (production-ready) and the Livepeer Cloud Community Gateway (free, for experimentation).
+The Livepeer AI Gateway API is the REST API surface for AI inference on the Livepeer Network. Any application can call a gateway directly when it has the right endpoint and credentials. Hosted gateway providers can differ in authentication, supported pipelines, pricing, and availability.
The API accepts standard JSON over HTTP and supports pipelines including text-to-image, image-to-image, image-to-video, upscaling, audio-to-text, and more. For developers who want full control over which Gateway they use, or who want to self-host a Gateway node for cost savings, this is the direct access path.
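The shape of a direct gateway call can be sketched as follows. This is a minimal sketch, not a definitive client: the gateway base URL, the Bearer-token auth header, and the `stabilityai/sd-turbo` model id are assumptions for illustration; check your gateway provider's documentation for its actual endpoint, supported models, and credential scheme.

```python
import json
import os
import urllib.request

# Assumptions: base URL and API key come from the environment; the
# /text-to-image path and Bearer auth scheme may differ per gateway provider.
GATEWAY_URL = os.environ.get("LIVEPEER_GATEWAY_URL", "https://example-gateway.invalid")
API_KEY = os.environ.get("LIVEPEER_API_KEY")

payload = {
    "model_id": "stabilityai/sd-turbo",  # hypothetical model id for illustration
    "prompt": "a lighthouse at dusk, oil painting",
    "width": 512,
    "height": 512,
}

def build_request() -> urllib.request.Request:
    """Build the JSON-over-HTTP request; sending it requires real credentials."""
    req = urllib.request.Request(
        f"{GATEWAY_URL}/text-to-image",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    if API_KEY:
        req.add_header("Authorization", f"Bearer {API_KEY}")
    return req

req = build_request()
print(req.full_url.endswith("/text-to-image"))  # True
```

Only the request construction is shown; dispatch it with `urllib.request.urlopen(req)` once you have a gateway endpoint and key.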
@@ -497,11 +495,8 @@ Daydream is one product built on the AI Gateway API. If you are using the Daydre
As your usage scales, the natural next step is running your own Gateway – reducing per-inference costs and gaining full control over Orchestrator selection. This is the graduation path described in the [Nov 2025 Network Vision](https://blog.livepeer.org/a-real-time-update-to-the-livepeer-network-vision/): just as companies move from Heroku to AWS to their own infrastructure as they scale, developers move from hosted Gateway access to self-hosted Gateways as their usage and requirements grow.
-**API reference:** [docs.livepeer.org/ai/api-reference](/v2/Gateways/resources/reference/technical/api-reference/AI-API/ai)
-**Available Gateways:** [docs.livepeer.org/ai/builders/Gateways](/v2/Gateways/guides/operator-considerations/production-Gateways)
-
-[//]: # (REVIEW: Confirm exact endpoint URL for the hosted Studio Gateway AI API (separate from Studio video API\). Clarify with Rick or Mehrdad.)
-[//]: # (REVIEW: Confirm whether "AI Jobs API" is deprecated terminology – current evidence says yes but needs go-livepeer README confirmation.)
+**API reference:** [docs.livepeer.org/ai/api-reference](/v2/gateways/resources/reference/technical/api-reference/AI-API/ai)
+**Available Gateways:** [docs.livepeer.org/ai/builders/Gateways](/v2/gateways/guides/operator-considerations/production-gateways)
@@ -527,8 +522,6 @@ Building at the protocol layer means running and operating network infrastructur
**Primary repos:** `livepeer/go-livepeer`, `livepeer/protocol`, `livepeer/ai-worker`, `livepeer/comfystream`
-[//]: # (REVIEW: Verify go-livepeer README developer-facing API surface description – not directly checked in this research session due to domain restrictions.)
-
## Choosing Your Layer
@@ -538,8 +531,8 @@ Building at the protocol layer means running and operating network infrastructur
| Add video streaming or transcoding to an app | Studio API | [Studio quickstart](https://docs.livepeer.studio) |
| Add real-time AI video effects to an app | Daydream API | [Daydream quickstart](https://pipelines.livepeer.org) |
| Call the AI inference API directly, choose your Gateway provider | AI Gateway API | [AI quickstart](/v2/developers/get-started/ai-quickstart) |
-| Build custom AI video pipelines with ComfyUI workflows | ComfyStream | [ComfyStream quickstart](/v2/developers/get-started/ComfyStream-quickstart) |
-| Reduce inference costs by self-hosting a Gateway | Self-hosted Gateway | [When to run your own Gateway](/v2/developers/concepts/running-a-Gateway) |
+| Build custom AI video pipelines with ComfyUI workflows | ComfyStream | [ComfyStream quickstart](/v2/developers/get-started/comfystream-quickstart) |
+| Reduce inference costs by self-hosting a Gateway | Self-hosted Gateway | [When to run your own Gateway](/v2/developers/concepts/running-a-gateway) |
| Contribute to the network, run nodes, or build new pipeline types | Protocol layer | [OSS stack](/v2/developers/concepts/oss-stack) |
diff --git a/v2/developers/concepts/ecosystem-map.mdx b/v2/developers/concepts/ecosystem-map.mdx
index 1e2dfeb24..f8f093e9a 100644
--- a/v2/developers/concepts/ecosystem-map.mdx
+++ b/v2/developers/concepts/ecosystem-map.mdx
@@ -29,10 +29,10 @@ complexity: beginner
lifecycleStage: discover
---
-import { LinkArrow } from '/snippets/components/primitives/links.jsx'
-import { StyledTable, TableRow, TableCell } from '/snippets/components/layout/tables.jsx'
-import { CustomDivider } from '/snippets/components/primitives/divider.jsx'
-import { CenteredContainer } from '/snippets/components/layout/containers.jsx'
+import { LinkArrow } from '/snippets/components/elements/links/Links.jsx'
+import { StyledTable, TableRow, TableCell } from '/snippets/components/displays/tables/Tables.jsx'
+import { CustomDivider } from '/snippets/components/elements/spacing/Divider.jsx'
+import { CenteredContainer } from '/snippets/components/wrappers/containers/Containers.jsx'
The Livepeer GitHub organisation hosts 173 public repositories. This page maps the ones that matter for developers – organised by stack layer, with the role each repo plays and where to start.
diff --git a/v2/developers/concepts/role.mdx b/v2/developers/concepts/role.mdx
index 22208195a..2037b72d5 100644
--- a/v2/developers/concepts/role.mdx
+++ b/v2/developers/concepts/role.mdx
@@ -2,8 +2,73 @@
title: Developer Role in the Livepeer Ecosystem
sidebarTitle: Role
description: "An overview of the different roles in the Livepeer ecosystem, including developers, builders, operators"
+status: current
---
-{/* TO DO */}
+import { StyledTable, TableRow, TableCell } from '/snippets/components/displays/tables/Tables.jsx'
+import { CustomDivider } from '/snippets/components/elements/spacing/Divider.jsx'
-The Role
+Developers can use Livepeer at several layers. The right role depends on whether you want a managed API, a real-time AI workflow, a custom model runtime, or protocol-level contribution.
+
+Start with the highest layer that fits your use case. Move lower only when you need control over routing, runtime behavior, or protocol code.
+
+
+
+## Choose your role
+
+
+
+| Role | You are building | Start here |
+|---|---|---|
+| **Video API developer** | Livestreaming, VOD upload, transcoding, playback, or access control. | [Video on Livepeer](/v2/developers/concepts/video-on-livepeer) |
+| **AI API developer** | Batch AI inference from an application without operating GPUs. | [API reference](/v2/developers/resources/reference/apis) |
+| **Real-time AI developer** | Live video effects, streaming inference, ComfyUI workflows, or low-latency media pipelines. | [Workload fit](/v2/developers/build/workload-fit) |
+| **Custom runtime developer** | A containerised model or pipeline that needs to run on operator infrastructure. | [BYOC guide](/v2/developers/build/byoc) |
+| **Protocol contributor** | go-livepeer, protocol contracts, AI runtime code, gateway behavior, or developer tooling. | [OSS stack](/v2/developers/concepts/oss-stack) |
+
+
+
+
+## How to decide
+
+Use Studio APIs when you want the fastest path to a product and can rely on the managed gateway. Use ComfyStream or BYOC when the workload needs real-time frame processing, a custom model, or a container that cannot be represented by the hosted API surface.
+
+Run a gateway or contribute to gateway code when your product needs custom routing, authentication, pricing, retries, or orchestration policy. Work at the protocol layer when the change affects staking, rewards, ticket settlement, governance, contract addresses, or node behavior.
+
+
+
+ Compare the main build paths and choose the right entry point.
+
+
+ Check whether a real-time AI workload belongs on Livepeer.
+
+
+ Set up the OSS workflow and contribution process.
+
+
diff --git a/v2/developers/concepts/video-on-livepeer.mdx b/v2/developers/concepts/video-on-livepeer.mdx
index df0635dcb..db38e285f 100644
--- a/v2/developers/concepts/video-on-livepeer.mdx
+++ b/v2/developers/concepts/video-on-livepeer.mdx
@@ -228,16 +228,10 @@ export const VideoPlayer = ({ playbackId }: { playbackId: string }) => (
## Related pages
-
- Create your first stream and test end-to-end in 15 minutes.
+
+ Compare the maintained Livepeer API surfaces before choosing an integration path.
-
- Understand all three access layers and when to use Studio vs a self-hosted gateway.
-
-
- Gate content with JWTs or webhook-based authorisation.
-
-
- Subscribe to stream and asset events for event-driven applications.
+
+ Map the current SDK families to video, AI, and player use cases.
diff --git a/v2/developers/get-started/comfystream-quickstart.mdx b/v2/developers/get-started/comfystream-quickstart.mdx
index e2e5ac912..cbddc6215 100644
--- a/v2/developers/get-started/comfystream-quickstart.mdx
+++ b/v2/developers/get-started/comfystream-quickstart.mdx
@@ -28,7 +28,6 @@ audience: developer
lastVerified: 2026-03-17T00:00:00.000Z
status: draft
---
-[//]: # (REVIEW: OQ-D1 unresolved. This draft treats ComfyStream as primarily a standalone tool (no Livepeer account required for the core tutorial\). The Livepeer connection is covered in a separate section as the next step. Confirm with Peter/Rick before publishing whether this framing is correct, or whether a specific Livepeer network onboarding flow should be mandatory.)
By the end of this quickstart, you will have a ComfyStream instance running with a real-time AI effect applied to a live video feed. Once you have a working pipeline, the final section shows how to connect it to the Livepeer network.
@@ -44,11 +43,10 @@ ComfyStream processes live video using ComfyUI workflows on a local or cloud GPU
ComfyStream requires an NVIDIA GPU. Windows and macOS are not supported for the server component.
-[//]: # (REVIEW: Confirm Linux-only from docs.comfystream.org or Rick. The README does not explicitly state OS, but PyTorch + CUDA dependency strongly implies Linux/NVIDIA only.)
**Prerequisites across all paths:**
-- NVIDIA GPU with sufficient VRAM [//]: # (REVIEW: Verify minimum VRAM from docs.comfystream.org hardware section. Likely 12–16 GB for StreamDiffusion; 24 GB recommended for real-time performance.)
+- NVIDIA GPU with enough VRAM for your selected workflow
- A modern browser (Chrome or Firefox) for the ComfyStream UI
@@ -62,7 +60,7 @@ ComfyStream requires an NVIDIA GPU. Windows and macOS are not supported for the
Open the [livepeer-comfystream RunPod template](https://runpod.io/console/deploy?template=w01m180vxx&ref=u8tlskew) and select a GPU pod.
- For StreamDiffusion workflows, select a GPU with at least [//]: # (REVIEW: confirm VRAM) VRAM. An RTX A4000 or A40 is a reasonable starting point.
+ For StreamDiffusion workflows, select a GPU with enough VRAM for the workflow and resolution you plan to run. An RTX A4000 or A40 is a reasonable starting point.
Click **Deploy**.
@@ -73,7 +71,6 @@ ComfyStream requires an NVIDIA GPU. Windows and macOS are not supported for the
ComfyStream exposes two ports:
- `8188` – ComfyUI interface
- `8889` – ComfyStream WebRTC server
- [//]: # (REVIEW: Confirm exact ports from docs.comfystream.org or docker-compose.yml in the repo. These are based on standard ComfyUI port + common ComfyStream server port.)
@@ -91,7 +88,6 @@ ComfyStream requires an NVIDIA GPU. Windows and macOS are not supported for the
```bash icon="terminal"
docker pull livepeer/comfystream
```
- [//]: # (REVIEW: Verify the current tag recommendation. If there is a pinned release tag, use that rather than latest.)
@@ -103,7 +99,6 @@ ComfyStream requires an NVIDIA GPU. Windows and macOS are not supported for the
```
For full installation options including volume mounts and workspace configuration, see the [ComfyStream install docs](https://docs.comfystream.org/technical/get-started/install).
- [//]: # (REVIEW: Confirm exact docker run flags from docs.comfystream.org / docker-compose.yml. The above is a minimal illustration only – actual flags may differ (workspace path, model path mounts\).)
@@ -139,7 +134,6 @@ ComfyStream requires an NVIDIA GPU. Windows and macOS are not supported for the
- [//]: # (REVIEW: Confirm the recommended starter model and download command from the repo's scripts/README.md. The repo has a `setup_models.py` script that handles model downloads.)
```bash icon="terminal"
python src/comfystream/scripts/setup_models.py --workspace /path/to/ComfyUI
```
@@ -167,8 +161,7 @@ ComfyStream uses ComfyUI workflow JSON files. The repository includes multiple s
In the ComfyStream UI, click the workflow selector and choose a workflow file.
- For your first run, use the StreamDiffusion SD 1.5 workflow – it is the lightest and fastest to compile.
- [//]: # (REVIEW: Confirm the recommended starter workflow filename from the repo's workflows/ directory. Likely something like `streamdiffusion_sd15.json`.)
+ For your first run, choose the lightest starter workflow available in your ComfyStream checkout.
@@ -214,13 +207,11 @@ For the Daydream API, request access at [daydream.live](https://daydream.live).
For the BYOC path, the integration layer is [PyTrickle](https://github.com/livepeer/pytrickle) – a Python package that enables ComfyStream to register as a Livepeer AI worker. See the [BYOC documentation](/v2/developers/build/byoc) for setup steps.
-[//]: # (REVIEW: OQ-D1 – confirm whether this two-path framing is correct with Peter/Rick. Also confirm whether the Livepeer community gateway (free endpoint\) can serve as a simple test target for new developers before going full BYOC. If so, add a step here showing that connection.)
-
## What You Can Build
-ComfyStream supports the following pipeline types in production (Phase 4, January 2026):
+ComfyStream supports the following pipeline types, subject to the current release and gateway/operator configuration:
- **StreamDiffusion** – real-time style transfer and image-to-image on live video
- **StreamDiffusion V2** – second-generation diffusion pipeline, supports video-to-video and image-to-image
@@ -243,5 +234,3 @@ ComfyStream supports the following pipeline types in production (Phase 4, Januar
Full ComfyStream node reference, troubleshooting, and hardware guides.
-
-[//]: # (REVIEW: build/comfystream and build/byoc hrefs – confirm these paths exist or will exist before publication.)
diff --git a/v2/developers/get-started/contributor-quickstart.mdx b/v2/developers/get-started/contributor-quickstart.mdx
index e64701356..a677c4eeb 100644
--- a/v2/developers/get-started/contributor-quickstart.mdx
+++ b/v2/developers/get-started/contributor-quickstart.mdx
@@ -20,7 +20,7 @@ keywords:
pageType: tutorial
purpose: start
audience: developer
-status: current
+status: draft
lastVerified: 2026-04-05T00:00:00.000Z
---
diff --git a/v2/developers/get-started/setup-paths.mdx b/v2/developers/get-started/setup-paths.mdx
index 0c92cbd8b..5153de687 100644
--- a/v2/developers/get-started/setup-paths.mdx
+++ b/v2/developers/get-started/setup-paths.mdx
@@ -52,7 +52,7 @@ Choose the quickstart for your build goal. If you are unsure which applies, use
**SDK options:** `livepeer` (npm v3.5.0 / PyPI), `@livepeer/ai` (alpha, AI-only).
-
+
First API call, error handling, and next steps in under 10 minutes.
@@ -73,7 +73,7 @@ Choose the quickstart for your build goal. If you are unsure which applies, use
**What you build:** A ComfyStream server connected to the Livepeer network, with a live webcam feed processed through an SD1.5 workflow at real-time frame rates.
-
+
ComfyStream install, SD1.5 workflow, and webcam-to-processed-stream in 30 minutes.
@@ -104,8 +104,8 @@ go get github.com/livepeer/livepeer-go
```
-
- Create a stream, get the ingest URL, test with OBS or ffmpeg.
+
+ Start from the maintained API surface before following a credentialed video workflow.
@@ -117,13 +117,13 @@ go get github.com/livepeer/livepeer-go
**Time:** 1-2 hours (includes testnet setup).
-**Prerequisites:** Go 1.21+ for go-livepeer; Python 3.12 for ai-runner or ComfyStream; Docker.
+**Prerequisites:** Go matching `go-livepeer/go.mod`, compatible FFmpeg headers, Foundry Anvil, Node.js, yarn, and git.
**What you build:** A local testnet with one orchestrator and one gateway running, capable of processing test jobs.
-
- Clone go-livepeer, start a local testnet, and submit your first test job.
+
+ Build go-livepeer, deploy local protocol contracts, and run a local orchestrator and gateway.
@@ -149,7 +149,7 @@ go get github.com/livepeer/livepeer-go
Run your own custom model on the network
- BYOC -- see BYOC Guide
+ BYOC -- see BYOC Guide
Transcode video or run a livestream
diff --git a/v2/developers/get-started/transcoding-quickstart.mdx b/v2/developers/get-started/transcoding-quickstart.mdx
index 9447e0844..2ce3a9ee2 100644
--- a/v2/developers/get-started/transcoding-quickstart.mdx
+++ b/v2/developers/get-started/transcoding-quickstart.mdx
@@ -19,6 +19,7 @@ keywords:
'og:image:height': 630
pageType: instruction
audience: developer
+status: draft
lastVerified: 2026-03-17T00:00:00.000Z
purpose: start
---
diff --git a/v2/developers/get-started/video-quickstart.mdx b/v2/developers/get-started/video-quickstart.mdx
index b41b57c5f..24bf99db7 100644
--- a/v2/developers/get-started/video-quickstart.mdx
+++ b/v2/developers/get-started/video-quickstart.mdx
@@ -22,11 +22,6 @@ complexity: beginner
lifecycleStage: setup
---
-{/* RESOLUTION: video-quickstart.mdx is a stub (582 bytes) that overlaps with transcoding-quickstart.mdx.
- Decision: This file serves as a redirect to the canonical transcoding quickstart.
- If docs.json has both paths in nav, remove video-quickstart and keep transcoding-quickstart only.
- If Mintlify redirect frontmatter is not supported, replace this file content with a Note and CardGroup pointing to transcoding-quickstart. */}
-
import { CustomDivider } from '/snippets/components/elements/spacing/Divider.jsx'
import { CenteredContainer } from '/snippets/components/wrappers/containers/Containers.jsx'
diff --git a/v2/developers/guides/beta-projects/naap.mdx b/v2/developers/guides/beta-projects/naap.mdx
index dae4e2548..fd22192ba 100644
--- a/v2/developers/guides/beta-projects/naap.mdx
+++ b/v2/developers/guides/beta-projects/naap.mdx
@@ -20,7 +20,7 @@ keywords:
'og:image:height': 630
pageType: overview
audience: developer
-status: current
+status: draft
lastVerified: 2026-04-21
---
@@ -282,7 +282,7 @@ Full development guides are at [operator.livepeer.org/docs/guides/your-first-plu
## Related pages
-
+
Identity, billing, and payment signing infrastructure. Integrates with NaaP's Developer API Manager as a billing provider via OAuth.
@@ -291,7 +291,7 @@ Full development guides are at [operator.livepeer.org/docs/guides/your-first-plu
Explore grants, RFPs, and open-source contribution paths for ecosystem builders.
-
+
Governance structures and treasury mechanisms that fund Livepeer ecosystem projects.
diff --git a/v2/developers/guides/beta-projects/pymthouse.mdx b/v2/developers/guides/beta-projects/pymthouse.mdx
index ebf211fd6..71a136367 100644
--- a/v2/developers/guides/beta-projects/pymthouse.mdx
+++ b/v2/developers/guides/beta-projects/pymthouse.mdx
@@ -20,7 +20,7 @@ keywords:
'og:image:height': 630
pageType: overview
audience: developer
-status: current
+status: draft
lastVerified: 2026-04-21
---
@@ -201,7 +201,7 @@ Full integration documentation is at [docs.pymthouse.com](https://docs.pymthouse
Review the operational checks before moving an AI integration into production traffic.
-
+
Plugin-based network portal for developers and operators building on Livepeer AI compute.
diff --git a/v2/developers/guides/contribution-guide.mdx b/v2/developers/guides/contribution-guide.mdx
index f15a970f6..9d4baf8a4 100644
--- a/v2/developers/guides/contribution-guide.mdx
+++ b/v2/developers/guides/contribution-guide.mdx
@@ -22,6 +22,7 @@ keywords:
'og:image:width': 1200
'og:image:height': 630
audience: developer
+status: current
lastVerified: 2026-03-17T00:00:00.000Z
purpose: build
---
diff --git a/v2/developers/guides/developer-guides.mdx b/v2/developers/guides/developer-guides.mdx
index d0574ef03..a5203071d 100644
--- a/v2/developers/guides/developer-guides.mdx
+++ b/v2/developers/guides/developer-guides.mdx
@@ -23,6 +23,7 @@ keywords:
'og:image:width': 1200
'og:image:height': 630
audience: developer
+status: draft
lastVerified: 2026-03-17T00:00:00.000Z
purpose: orient
---
@@ -49,12 +50,12 @@ Start with the section that matches the outcome you need.
Get an API key, make your first AI request, and understand the next build steps for application integrations.
-
- Ship a first video workflow for livestreaming, playback, and asset processing.
+
+ Compare the maintained API surfaces before following a credentialed video workflow.
-
- Move into protocol, tooling, and ecosystem contribution paths if you are building beyond app integrations.
+
+ Build go-livepeer, deploy local protocol contracts, and run a local orchestrator and gateway.
@@ -127,23 +128,7 @@ Step-by-step implementation pages for core video workflows.
## Tutorials
-End-to-end build examples that combine multiple Livepeer capabilities.
-
-
-
-
- Build a multi-step application flow around Livepeer AI capabilities.
-
-
-
- Connect Livepeer video workflows to decentralised storage and delivery patterns.
-
-
-
- Implement controlled access patterns for premium or gated video experiences.
-
-
-
+End-to-end tutorials that require external credentials or wallet flows are held out of the public path until their commands are revalidated.
@@ -165,12 +150,8 @@ Use these sections when you need contributor routes, exact reference material, o
Find the API surfaces relevant to video, AI, and infrastructure integrations.
-
- Review pricing-sensitive and throughput-sensitive constraints before you ship production traffic.
-
-
-
- Understand the Python tooling surface and where it fits relative to the other SDK and API options.
+
+ Validate protocol and go-livepeer flows locally before touching public networks.
diff --git a/v2/developers/guides/local-testnet-deployment.mdx b/v2/developers/guides/local-testnet-deployment.mdx
index bb5042034..08aa34d92 100644
--- a/v2/developers/guides/local-testnet-deployment.mdx
+++ b/v2/developers/guides/local-testnet-deployment.mdx
@@ -2,7 +2,7 @@
title: Deploy a local testnet
sidebarTitle: Local testnet
description: >-
- Deploy the full Livepeer protocol stack locally using Hardhat and connect go-livepeer nodes to your own contracts for
+ Deploy the full Livepeer protocol stack locally using Anvil, Hardhat deploy tasks, and go-livepeer nodes for
development and testing.
lifecycleStage: operate
complexity: intermediate
@@ -22,16 +22,17 @@ keywords:
pageType: instruction
purpose: build
audience: developer
-lastVerified: 2026-03-28T00:00:00.000Z
+status: current
+lastVerified: 2026-05-03T00:00:00.000Z
---
import { CustomDivider } from '/snippets/components/elements/spacing/Divider.jsx'
import { LinkArrow } from '/snippets/components/elements/links/Links.jsx'
-Running a local Livepeer stack lets you develop against real protocol contracts without spending ETH or affecting mainnet state. The protocol repo ships a Hardhat deploy script that deploys every contract automatically, seeds the faucet with test LPT, and writes a deployments JSON you can point go-livepeer at.
+Running a local Livepeer stack lets you develop against real protocol contracts without spending ETH or affecting mainnet state. Use Anvil for the local JSON-RPC chain, then use the protocol repo's Hardhat deploy tasks to deploy every contract, seed the faucet with test LPT, and write deployment JSON files that go-livepeer can read.
- local Hardhat deployment only. For Arbitrum Sepolia testnet deployment the same deploy script applies – swap the Hardhat network target for `arbitrumSepolia` and provide a funded Sepolia wallet.
+ Local Anvil deployment only. For Arbitrum Sepolia testnet deployment the same deploy script applies – swap the Hardhat network target for `arbitrumSepolia` and provide a funded Sepolia wallet.
@@ -41,9 +42,12 @@ Running a local Livepeer stack lets you develop against real protocol contracts
You need the following installed before starting:
- **Node.js** v18 or later and **yarn**
-- **Go** 1.21 or later (for building go-livepeer)
- **git**
-- An Ethereum wallet with a private key (for Hardhat, the default accounts are pre-funded – no setup needed)
+- **Foundry Anvil** for the local JSON-RPC chain
+- **Go** matching the version declared in `go-livepeer/go.mod`, plus **make** (for building go-livepeer)
+- **FFmpeg development headers** compatible with the `go-livepeer` branch you build; source builds use `pkg-config` for `libavformat`, `libavfilter`, `libavcodec`, `libavutil`, and `libswscale`
+- **Foundry cast** (optional, for direct contract transaction examples)
+- No funded wallet is required for the local Anvil path; Anvil creates pre-funded local-only test accounts.
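Before starting, a quick presence check for these tools can save a failed build midway through. This is a minimal sketch: it only confirms each binary is on `PATH`, not that the versions match `go-livepeer/go.mod` or the FFmpeg headers the branch expects.

```shell
# Check that each prerequisite binary is resolvable; versions still need
# manual verification against go-livepeer/go.mod and the lpms FFmpeg pin.
for tool in node yarn git anvil go make pkg-config; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```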
@@ -72,44 +76,45 @@ You need the following installed before starting:
This compiles all Solidity contracts in `contracts/` using the compiler version specified in `hardhat.config.ts`. Output goes to `artifacts/`.
-
+
Open a separate terminal and leave this running for the duration of your development session.
```bash
- yarn hardhat node
+ anvil --chain-id 31337 \
+ --host 127.0.0.1 \
+ --port 8545 \
+ --mnemonic "test test test test test test test test test test test junk"
```
- Hardhat starts a local JSON-RPC node at `http://127.0.0.1:8545` with chain ID `31337`. It pre-funds 20 accounts with 10,000 ETH each and prints their private keys to stdout.
+ Anvil starts a local JSON-RPC node at `http://127.0.0.1:8545` with chain ID `31337`. The mnemonic above is the public Anvil/Hardhat development mnemonic. Use it only for disposable local testing.
+
+
+
+ The `protocol` repo's bundled Hardhat node is useful for Solidity-only work, but current go-livepeer builds use go-ethereum RPC calls that send calldata in the JSON-RPC `input` field. Hardhat 2.8.x does not handle that path correctly, so go-livepeer fails when it reads `Controller.getContract(...)`. Anvil handles the same calls correctly.
In your original terminal, run the deploy script against the local node:
```bash
- yarn hardhat deploy --network gethDev
+ yarn hardhat deploy --tags Contracts,Poll --network localhost
```
- The `gethDev` network config (from `deploy/migrations.config.ts`) sets short round lengths and unlock periods suitable for local testing:
+ The `localhost` network points Hardhat at the Anvil node on chain ID `31337`. The `--tags Contracts,Poll` selection matches the repo's deploy script and avoids running standalone deployment files before their dependencies exist.
- | Parameter | Value |
- |---|---|
- | `roundLength` | 50 blocks |
- | `unbondingPeriod` | 7 rounds |
- | `unlockPeriod` | 50 blocks |
- | `faucet.requestAmount` | 10 LPT |
- | `faucet.requestWait` | 1 hour |
-
- On completion, contract addresses are written to `deployments/gethDev/`. The deployer account is set as Controller owner and Governor owner. The faucet is seeded with `6,343,700 LPT` (`genesis.crowdSupply`) and the deployer receives `500,000 LPT` (`genesis.companySupply`).
+ On completion, contract addresses are written to `deployments/localhost/`. The deployer account is set as Controller owner and Governor owner. The faucet is seeded with `6,343,700 LPT` (`genesis.crowdSupply`) and the deployer receives `500,000 LPT` (`genesis.companySupply`).
-
- Every contract address is resolvable from the Controller. Find it in the deployment output:
+
+ Set a short round length for local development, unpause the Controller, then read the Controller address from the deployment output:
```bash
- cat deployments/gethDev/Controller.json | grep '"address"'
+ yarn hardhat set-round-length --roundlength 50 --network localhost
+ yarn hardhat unpause --network localhost
+ cat deployments/localhost/Controller.json | grep '"address"'
```
- You will need this address to configure go-livepeer in the next section.
+ The Controller starts paused after deployment. You must unpause it before faucet, bonding, and round-initialisation calls can succeed. You will need the Controller address to configure go-livepeer in the next section.
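To verify the unpause took effect, you can read the pause flag directly. This sketch assumes the Controller exposes a standard `paused()` getter; check the ABI in `deployments/localhost/Controller.json` if the call reverts.

```shell
# Replace <controller-address> with the address from Controller.json.
cast call <controller-address> "paused()(bool)" \
  --rpc-url http://127.0.0.1:8545
# Expect "false" after the unpause task has run.
```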
@@ -153,70 +158,130 @@ With contracts deployed, you can run go-livepeer nodes against your local stack.
make
```
- The binary is built to `./livepeer`.
+ The binary is built to `./livepeer`. If the build fails in `github.com/livepeer/lpms/ffmpeg`, install FFmpeg development headers that match the `lpms` version used by the branch you checked out. The go-livepeer CI path builds a pinned FFmpeg through `github.com/livepeer/lpms/ffmpeg/install_ffmpeg.sh`; distro FFmpeg headers may be too old for current branches.
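If you hit that failure, one possible path is building the pinned FFmpeg yourself. This is a sketch under assumptions: the script location follows the `lpms` layout named above, and the install prefix (`~/compiled`) matches the convention used by go-livepeer build scripts. Read the script before running it.

```shell
# Build the pinned FFmpeg used by lpms, then point pkg-config at it.
git clone https://github.com/livepeer/lpms.git ~/lpms
bash ~/lpms/ffmpeg/install_ffmpeg.sh

# The install script has historically installed under ~/compiled; verify locally.
export PKG_CONFIG_PATH="$HOME/compiled/lib/pkgconfig:$PKG_CONFIG_PATH"
make
```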
From your protocol repo:
```bash
- cat deployments/gethDev/Controller.json | grep '"address"'
+ cat deployments/localhost/Controller.json | grep '"address"'
# "address": "0x5FbDB2315678afecb367f032d93F642f64180aa3"
```
+
+ In the `protocol` repo terminal, generate geth-compatible keystores for the first two Anvil accounts. The script uses the ethers v5 API (`Wallet.fromMnemonic`), so run it from the `protocol` repo where that version is a dependency. These are public development accounts from the mnemonic above; do not use this pattern for any real wallet.
+
+ ```bash
+ mkdir -p ~/.livepeer-local/keystores
+
+ node <<'NODE'
+ const fs = require("fs");
+ const path = require("path");
+ const { ethers } = require("ethers");
+
+ const outRoot = path.join(process.env.HOME, ".livepeer-local", "keystores");
+ const mnemonic = "test test test test test test test test test test test junk";
+ const accounts = [
+ { name: "orchestrator", index: 0 },
+ { name: "gateway", index: 1 },
+ ];
+
+ (async () => {
+ for (const account of accounts) {
+ const wallet = ethers.Wallet.fromMnemonic(
+ mnemonic,
+ `m/44'/60'/0'/0/${account.index}`
+ );
+ const dir = path.join(outRoot, account.name);
+ fs.mkdirSync(dir, { recursive: true });
+ for (const file of fs.readdirSync(dir)) {
+ fs.rmSync(path.join(dir, file), { force: true });
+ }
+ const encrypted = await wallet.encrypt("");
+ const file = path.join(dir, `UTC--anvil-${account.name}-${wallet.address}`);
+ fs.writeFileSync(file, encrypted + "\n", { mode: 0o600 });
+ console.log(`${account.name}: ${wallet.address} ${dir}`);
+ }
+ })().catch((error) => {
+ console.error(error);
+ process.exit(1);
+ });
+ NODE
+ ```
+
+
- Replace `` with the address from the previous step and `` with a path to an Ethereum keystore file (you can export one of the Hardhat accounts).
+ Replace `<controller-address>` with the address from the previous step.
```bash
./livepeer \
- -network offchain \
+ -network=devenv \
-ethUrl http://127.0.0.1:8545 \
    -ethController <controller-address> \
- -ethKeystorePath \
+ -ethAcctAddr 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 \
+ -ethKeystorePath ~/.livepeer-local/keystores/orchestrator \
-ethPassword "" \
- -orchestrator \
- -transcoder \
+ -dataDir ~/.livepeer-local/orchestrator-data \
+ -cliAddr 127.0.0.1:7935 \
+ -httpAddr 127.0.0.1:8935 \
-serviceAddr 127.0.0.1:8935 \
+ -orchestrator=true \
+ -transcoder=true \
-pricePerUnit 0 \
- -initializeRound
+ -initializeRound=true \
+ -blockPollingInterval 1 \
+ -monitor=false \
+ -currentManifest=true \
+ -startupAvailabilityCheck=false \
+ -testTranscoder=false
```
The key flags for local contract targeting:
| Flag | Purpose |
|---|---|
- | `-network offchain` | Disables built-in network configs so `-ethController` is used directly |
- | `-ethUrl` | JSON-RPC endpoint of your local Hardhat node |
+ | `-network=devenv` | Uses a contract-backed local development network |
+ | `-ethUrl` | JSON-RPC endpoint of your local Anvil node |
| `-ethController` | Address of the Controller contract from your deployment |
+ | `-ethAcctAddr` | Local Anvil account that matches the keystore directory |
| `-initializeRound` | Automatically initialises a new round when needed; essential for local testing |
- In a separate terminal, using a different keystore account:
+ In a separate terminal, use the second Anvil account and separate local ports:
```bash
./livepeer \
- -network offchain \
+ -network=devenv \
-ethUrl http://127.0.0.1:8545 \
    -ethController <controller-address> \
- -ethKeystorePath \
+ -ethAcctAddr 0x70997970C51812dc3A010C7d01b50e0d17dc79C8 \
+ -ethKeystorePath ~/.livepeer-local/keystores/gateway \
-ethPassword "" \
- -gateway \
+ -dataDir ~/.livepeer-local/gateway-data \
+ -cliAddr 127.0.0.1:7936 \
+ -httpAddr 127.0.0.1:8936 \
+ -rtmpAddr 127.0.0.1:1936 \
+ -gateway=true \
-orchAddr 127.0.0.1:8935 \
- -maxPricePerUnit 0
+ -maxPricePerUnit 0 \
+ -blockPollingInterval 1 \
+ -monitor=false
```
Using `-orchAddr` directly bypasses the ServiceRegistry lookup and connects the gateway to your local orchestrator immediately.
- The faucet address is in `deployments/gethDev/LivepeerTokenFaucet.json`. Call `request()` directly using cast or the go-livepeer CLI:
+ The faucet address is in `deployments/localhost/LivepeerTokenFaucet.json`. Call `request()` from one of Anvil's unlocked local accounts:
```bash
    cast send <faucet-address> "request()" \
--rpc-url http://127.0.0.1:8545 \
- --private-key
+ --from 0x70997970C51812dc3A010C7d01b50e0d17dc79C8 \
+ --unlocked
```
Each call transfers 10 LPT to the caller. The rate limit is 1 hour between requests, but whitelisted addresses bypass it entirely. Add your test addresses to the whitelist by calling `addToWhitelist(address)` from the deployer account.
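For example, to whitelist the gateway account, a call along these lines should work; the `<faucet-address>` placeholder and the deployer `--from` account follow the conventions used above.

```shell
# Whitelist the gateway account so it can call request() without waiting.
# <faucet-address> comes from deployments/localhost/LivepeerTokenFaucet.json.
cast send <faucet-address> "addToWhitelist(address)" \
  0x70997970C51812dc3A010C7d01b50e0d17dc79C8 \
  --rpc-url http://127.0.0.1:8545 \
  --from 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 \
  --unlocked
```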
@@ -230,13 +295,15 @@ With contracts deployed, you can run go-livepeer nodes against your local stack.
    # Token and BondingManager addresses are in deployments/localhost/*.json
    cast send <token-address> \
      "approve(address,uint256)" <bonding-manager-address> 1000000000000000000000 \
--rpc-url http://127.0.0.1:8545 \
- --private-key
+ --from 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 \
+ --unlocked
# Bond 1000 LPT to yourself (self-delegation = orchestrator registration)
    cast send <bonding-manager-address> \
- "bond(uint256,address)" 1000000000000000000000 \
+ "bond(uint256,address)" 1000000000000000000000 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 \
--rpc-url http://127.0.0.1:8545 \
- --private-key
+ --from 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 \
+ --unlocked
```
After the next round initialises (triggered automatically by `-initializeRound` on your orchestrator node), your orchestrator enters the active set and can call `reward()`.
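You can confirm active-set membership from the command line. This assumes the deployed BondingManager exposes an `isActiveTranscoder(address)` view; confirm the exact signature in `deployments/localhost/BondingManager.json` before relying on it.

```shell
# Replace <bonding-manager-address> with the deployed BondingManager address.
cast call <bonding-manager-address> \
  "isActiveTranscoder(address)(bool)" \
  0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 \
  --rpc-url http://127.0.0.1:8545
```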
@@ -250,7 +317,7 @@ With contracts deployed, you can run go-livepeer nodes against your local stack.
The same deploy script targets Arbitrum Sepolia with a different network name. You need a funded Sepolia wallet (Sepolia ETH from a faucet) and an Arbitrum Sepolia RPC URL.
```bash
-# Set your deployer private key
+# Set a funded testnet deployer private key. Never use a mainnet wallet key here.
export PRIVATE_KEY=0x...
# Deploy to Arbitrum Sepolia
diff --git a/v2/developers/guides/opportunities/grants-and-programmes.mdx b/v2/developers/guides/opportunities/grants-and-programmes.mdx
index 257afc6aa..077a1dcbb 100644
--- a/v2/developers/guides/opportunities/grants-and-programmes.mdx
+++ b/v2/developers/guides/opportunities/grants-and-programmes.mdx
@@ -78,7 +78,7 @@ Special Purpose Entities (SPEs) are treasury-funded bodies approved by LPT gover
**Status:** Active (Messari-tracked).
- **Key output:** [Build an AI Agent on Livepeer](/v2/developers/guides/tutorials/build-an-ai-agent-on-livepeer) tutorial.
+ **Key output:** AI agent and avatar infrastructure work; public tutorial validation is pending.
@@ -88,7 +88,7 @@ Special Purpose Entities (SPEs) are treasury-funded bodies approved by LPT gover
**Status:** Active. Pipeline live in production.
- **Key output:** LLM pipeline available at `POST /llm` via the AI Gateway API. See [AI on Livepeer](/v2/developers/concepts/ai-on-livepeer) for usage.
+ **Key output:** LLM pipeline available at `POST /llm` via the AI Gateway API. See [AI on Livepeer](/v2/developers/concepts/role) for usage.
@@ -106,7 +106,7 @@ Special Purpose Entities (SPEs) are treasury-funded bodies approved by LPT gover
- If you are building an AI agent, a VTuber product, or any application requiring decentralised LLM inference, the Agent SPE, LLM SPE, and Cloud SPE outputs are production-ready today. Start at the [AI Quickstart](/v2/developers/get-started/ai-quickstart) to make your first inference call.
+ If you are building an AI agent, a VTuber product, or any application requiring decentralised LLM inference, the Agent SPE, LLM SPE, and Cloud SPE outputs are production-ready today. Start at the [AI Quickstart](/v2/developers/resources/reference/apis) to make your first inference call.
diff --git a/v2/developers/guides/opportunities/rfps-and-proposals.mdx b/v2/developers/guides/opportunities/rfps-and-proposals.mdx
index d67408a4c..b199cdc9a 100644
--- a/v2/developers/guides/opportunities/rfps-and-proposals.mdx
+++ b/v2/developers/guides/opportunities/rfps-and-proposals.mdx
@@ -94,7 +94,7 @@ Examples of recent RFPs issued by the Foundation include a comprehensive documen
## SPE Treasury Proposals
-The Livepeer onchain treasury is a community-governed funding mechanism that has been accruing 10% of LPT inflation rewards since December 2023. With reserves of approximately 500,000 LPT and generating around 150,000 LPT per quarter, the treasury funds Special Purpose Entities (SPEs).
+The Livepeer onchain treasury is a community-governed funding mechanism that receives a protocol-level share of LPT inflation rewards. The current treasury reward cut is 10%, with an on-chain treasury balance ceiling of 750,000 LPT. Check the Livepeer Explorer or the live contracts before quoting current treasury balances or quarterly inflows.
An SPE is a working group funded by the community treasury to deliver on a specific mission. SPEs are accountable to the token-holder community for delivery and publish regular updates on the Forum.
diff --git a/v2/developers/guides/tutorials/build-an-ai-agent-on-livepeer.mdx b/v2/developers/guides/tutorials/build-an-ai-agent-on-livepeer.mdx
index 86b65421a..c843c090f 100644
--- a/v2/developers/guides/tutorials/build-an-ai-agent-on-livepeer.mdx
+++ b/v2/developers/guides/tutorials/build-an-ai-agent-on-livepeer.mdx
@@ -21,7 +21,7 @@ keywords:
pageType: tutorial
purpose: build
audience: developer
-status: current
+status: draft
lastVerified: 2026-04-05T00:00:00.000Z
---
diff --git a/v2/developers/guides/tutorials/ipfs-video-integration.mdx b/v2/developers/guides/tutorials/ipfs-video-integration.mdx
index e0c8e57f9..f42f73d5d 100644
--- a/v2/developers/guides/tutorials/ipfs-video-integration.mdx
+++ b/v2/developers/guides/tutorials/ipfs-video-integration.mdx
@@ -20,7 +20,7 @@ keywords:
pageType: tutorial
purpose: build
audience: developer
-status: current
+status: draft
lastVerified: 2026-04-05T00:00:00.000Z
---
diff --git a/v2/developers/guides/tutorials/token-gated-video.mdx b/v2/developers/guides/tutorials/token-gated-video.mdx
index d2decaaf4..f56bf6917 100644
--- a/v2/developers/guides/tutorials/token-gated-video.mdx
+++ b/v2/developers/guides/tutorials/token-gated-video.mdx
@@ -20,7 +20,7 @@ keywords:
pageType: tutorial
purpose: build
audience: developer
-status: current
+status: draft
lastVerified: 2026-04-05T00:00:00.000Z
---
diff --git a/v2/developers/navigator.mdx b/v2/developers/navigator.mdx
index 6a0f71aa9..a8525a846 100644
--- a/v2/developers/navigator.mdx
+++ b/v2/developers/navigator.mdx
@@ -26,34 +26,6 @@ status: current
lastVerified: 2026-04-05T00:00:00.000Z
---
-{/*
-TO DO:
-Navigator must route the following personas:
--> Solutions Integrators -> looking for end products like Daydream, Studio, Streamplace, Embody
--> AI Video Developers -> building custom AI video workflows, models, or applications
--> Video Developers -> building streaming or transcoding applications
--> Protocol Contributors -> building or improving the Livepeer protocol codebase
--> Evaluators -> exploring Livepeer for potential use, but not yet building
-
-Developers Tab Main Persona:
-- AI Integrators
-- Building directly on the Livepeer Netwrok
-- Self Hosting
-- Pre-gateway application developer
-
-Items included on this page:
-- Naap, PymtHouse, ComfyStream, BYOC, AI Gateway API, Studio API, Protocol Contribution
-- Other activ repo's (sotryboard, livepeer-data-mcp, ai-runner, comfyui, etc) can be linked from the relevant paths, but not listed here to avoid overwhelm
-
-Needed
-- Decision tree to guide users to the right path
-- Comparison table of paths
-- Links to quickstarts and concept pages for each path
-- Clear descriptions of each path, tools, requirements, and use cases
-- Emphasis on AI video use cases, but also cover streaming/transcoding and protocol contribution
-
-*/}
-
import { LinkArrow } from '/snippets/components/elements/links/Links.jsx'
import { StyledTable, TableRow, TableCell } from '/snippets/components/displays/tables/Tables.jsx'
import { CustomDivider } from '/snippets/components/elements/spacing/Divider.jsx'
@@ -87,7 +59,7 @@ Livepeer supports AI inference, real-time AI video, video transcoding, custom co
-d '{"prompt": "a glowing neural network in a dark room", "model_id": "SG161222/RealVisXL_V4.0_Lightning"}'
```
-
+
@@ -101,7 +73,7 @@ Livepeer supports AI inference, real-time AI video, video transcoding, custom co
Supported models: StreamDiffusion, ControlNet, IPAdapter, FaceID, LoRA, Whisper (audio), Gemma (video understanding), SuperResolution.
-
+
@@ -113,9 +85,9 @@ Livepeer supports AI inference, real-time AI video, video transcoding, custom co
Requirements: Docker on Linux with NVIDIA GPU, Python model code (PyTorch recommended).
- BYOC reached production-grade in Phase 4 (January 2026). Embody SPE and Streamplace run production BYOC workloads today.
+ BYOC is an advanced operator path. Confirm the current gateway and orchestrator support matrix before treating a custom container workflow as production-ready.
-
+
@@ -129,7 +101,7 @@ Livepeer supports AI inference, real-time AI video, video transcoding, custom co
Free tier: 1,000 transcoding minutes per month. Growth: $100/month minimum.
-
+
@@ -139,9 +111,9 @@ Livepeer supports AI inference, real-time AI video, video transcoding, custom co
Four primary repositories: go-livepeer (protocol node, Go), ai-runner (AI inference runtime, Python), ComfyStream (real-time AI video, Python), protocol (Solidity contracts).
- Requirements: Go 1.21+ for go-livepeer; Python 3.12 for ComfyStream and PyTrickle; local testnet for integration testing.
+ Requirements: the Go version declared in `go-livepeer/go.mod`; Python 3.12 for ComfyStream and PyTrickle; local testnet for integration testing.
-
+
@@ -156,7 +128,7 @@ Livepeer supports AI inference, real-time AI video, video transcoding, custom co
- Studio-managed vs self-hosted gateway -- managed is simpler; self-hosted controls cost at scale
- Standard pipelines vs custom model -- BYOC for fully custom; gateway API for supported pipelines
-
then
+
then
@@ -226,7 +198,7 @@ Livepeer supports AI inference, real-time AI video, video transcoding, custom co
## Related pages
-
+
Understand the developer stack, AI pipelines, video infrastructure, and the OSS codebase.
diff --git a/v2/developers/portal.mdx b/v2/developers/portal.mdx
index 7fcca28eb..d7e9bdbc1 100644
--- a/v2/developers/portal.mdx
+++ b/v2/developers/portal.mdx
@@ -36,6 +36,7 @@ keywords:
tag: Start Here
pageType: navigation
audience: developer
+status: current
lastVerified: 2026-03-17T00:00:00.000Z
purpose: orient
---
@@ -45,16 +46,8 @@ import { CustomDivider } from '/snippets/components/elements/spacing/Divider.jsx
import { BlinkingIcon } from '/snippets/components/elements/icons/Icons.jsx'
import { Starfield } from "/snippets/components/scaffolding/heroes/StarfieldCanvas.jsx";
-{/*
- This TAB should be a reference for AI pipelines & video streaming.
- E.g BYOC, ComfyStream and other AI pipelines, and video streaming should be explained in this sections.
-
- I think Products (developer platforms) should be in their own tab / section - there is too much going on here otherwise.
- */}
-
- {/* HeroImageBackgroundComponent: Full-width Starfield Background - fills entire content area */}
@@ -105,7 +98,7 @@ import { Starfield } from "/snippets/components/scaffolding/heroes/StarfieldCanv
Start with livestreaming, playback, and video upload workflows on Livepeer.
@@ -113,7 +106,7 @@ import { Starfield } from "/snippets/components/scaffolding/heroes/StarfieldCanv
Call Livepeer AI endpoints first, then move into SDK, BYOC, and ComfyStream workflows.
@@ -149,7 +142,7 @@ import { Starfield } from "/snippets/components/scaffolding/heroes/StarfieldCanv
Browse implementation guides, tutorials, reference pages, and knowledge-hub resources.
diff --git a/v2/developers/resources/compendium/developer-help.mdx b/v2/developers/resources/compendium/developer-help.mdx
index 9ff9974bc..8d7fdfa8d 100644
--- a/v2/developers/resources/compendium/developer-help.mdx
+++ b/v2/developers/resources/compendium/developer-help.mdx
@@ -23,6 +23,7 @@ keywords:
'og:image:height': 630
audience: developer
purpose: troubleshoot
+status: current
lastVerified: "2026-03-03"
---
import { GotoCard } from '/snippets/components/elements/links/Links.jsx'
diff --git a/v2/developers/resources/compendium/example-applications.mdx b/v2/developers/resources/compendium/example-applications.mdx
index 1760d0e4c..33122bb26 100644
--- a/v2/developers/resources/compendium/example-applications.mdx
+++ b/v2/developers/resources/compendium/example-applications.mdx
@@ -5,6 +5,7 @@ lifecycleStage: discover
complexity: intermediate
audience: developer
icon: grid-round-2-plus
+status: current
---
Enjoy a curated collection of Livepeer example applications and integrations.
diff --git a/v2/developers/resources/compendium/resources.mdx b/v2/developers/resources/compendium/resources.mdx
index 1f1c4663d..d557ec8b6 100644
--- a/v2/developers/resources/compendium/resources.mdx
+++ b/v2/developers/resources/compendium/resources.mdx
@@ -22,6 +22,7 @@ keywords:
'og:image:width': 1200
'og:image:height': 630
audience: developer
+status: current
lastVerified: 2026-03-17T00:00:00.000Z
purpose: orient
---
diff --git a/v2/developers/resources/knowledge-hub/awesome-livepeer.mdx b/v2/developers/resources/knowledge-hub/awesome-livepeer.mdx
index 5905b7221..a7e15f60f 100644
--- a/v2/developers/resources/knowledge-hub/awesome-livepeer.mdx
+++ b/v2/developers/resources/knowledge-hub/awesome-livepeer.mdx
@@ -17,6 +17,7 @@ keywords:
'og:image:width': 1200
'og:image:height': 630
audience: developer
+status: current
lastVerified: 2026-03-17T00:00:00.000Z
purpose: orient
---
diff --git a/v2/developers/resources/knowledge-hub/deepwiki.mdx b/v2/developers/resources/knowledge-hub/deepwiki.mdx
index e33a41176..a866b1a40 100644
--- a/v2/developers/resources/knowledge-hub/deepwiki.mdx
+++ b/v2/developers/resources/knowledge-hub/deepwiki.mdx
@@ -18,6 +18,7 @@ keywords:
'og:image:width': 1200
'og:image:height': 630
audience: developer
+status: current
lastVerified: 2026-03-17T00:00:00.000Z
purpose: orient
---
diff --git a/v2/developers/resources/knowledge-hub/wiki.mdx b/v2/developers/resources/knowledge-hub/wiki.mdx
index 80b9618f0..ba908e598 100644
--- a/v2/developers/resources/knowledge-hub/wiki.mdx
+++ b/v2/developers/resources/knowledge-hub/wiki.mdx
@@ -16,6 +16,7 @@ keywords:
'og:image:width': 1200
'og:image:height': 630
audience: developer
+status: current
lastVerified: 2026-03-17T00:00:00.000Z
purpose: orient
---
diff --git a/v2/developers/resources/reference/apis.mdx b/v2/developers/resources/reference/apis.mdx
index cb0fc20c4..274fefded 100644
--- a/v2/developers/resources/reference/apis.mdx
+++ b/v2/developers/resources/reference/apis.mdx
@@ -89,7 +89,7 @@ import { CenteredContainer } from '/snippets/components/wrappers/containers/Cont
### Rate limits
-Rate limits are per API key and visible in the Studio dashboard under **Settings > API Keys**. The default limits for the Growth tier are sufficient for most production applications. Contact support for Enterprise rate limit increases.
+Rate limits are per API key and visible in the Studio dashboard under **Settings > API Keys**. Check the dashboard for the active limit on your account and contact support if your production workload needs a higher limit.
@@ -210,10 +210,10 @@ The live interactive reference for the AI API is mounted in the Gateways tab doc
Official SDK packages for TypeScript, Python, Go, and React.
-
+
Full examples, error handling, and retry configuration.
-
+
API key types and how to use them correctly.
diff --git a/v2/developers/resources/reference/pricing-rate-limits.mdx b/v2/developers/resources/reference/pricing-rate-limits.mdx
index 952cf476f..50cab0a63 100644
--- a/v2/developers/resources/reference/pricing-rate-limits.mdx
+++ b/v2/developers/resources/reference/pricing-rate-limits.mdx
@@ -18,7 +18,7 @@ keywords:
pageType: reference
purpose: reference
audience: developer
-status: current
+status: draft
lastVerified: 2026-04-05T00:00:00.000Z
---
diff --git a/v2/developers/resources/reference/pytrickle.mdx b/v2/developers/resources/reference/pytrickle.mdx
index 2dc17fb4f..05f9f6165 100644
--- a/v2/developers/resources/reference/pytrickle.mdx
+++ b/v2/developers/resources/reference/pytrickle.mdx
@@ -21,7 +21,7 @@ keywords:
pageType: reference
purpose: reference
audience: developer
-status: current
+status: draft
lastVerified: 2026-04-05T00:00:00.000Z
---
diff --git a/v2/developers/resources/reference/sdks.mdx b/v2/developers/resources/reference/sdks.mdx
index 77dfadf67..5b4664899 100644
--- a/v2/developers/resources/reference/sdks.mdx
+++ b/v2/developers/resources/reference/sdks.mdx
@@ -242,7 +242,7 @@ Do not use these packages in new projects:
## Related pages
-
+
Full code examples, error handling, and retry configuration for all SDKs.
diff --git a/v2/developers2/concepts/ecosystem-map.mdx b/v2/developers2/concepts/ecosystem-map.mdx
index 4400567f7..b5c6eeb18 100644
--- a/v2/developers2/concepts/ecosystem-map.mdx
+++ b/v2/developers2/concepts/ecosystem-map.mdx
@@ -27,10 +27,10 @@ status: current
lastVerified: 2026-04-21
---
-import { LinkArrow } from '/snippets/components/primitives/links.jsx'
-import { StyledTable, TableRow, TableCell } from '/snippets/components/layout/tables.jsx'
-import { CustomDivider } from '/snippets/components/primitives/divider.jsx'
-import { CenteredContainer } from '/snippets/components/layout/containers.jsx'
+import { LinkArrow } from '/snippets/components/elements/links/Links.jsx'
+import { StyledTable, TableRow, TableCell } from '/snippets/components/displays/tables/Tables.jsx'
+import { CustomDivider } from '/snippets/components/elements/spacing/Divider.jsx'
+import { CenteredContainer } from '/snippets/components/wrappers/containers/Containers.jsx'
The Livepeer GitHub organisation hosts 173 public repositories. This page maps the ones that matter for developers — organised by stack layer, with the role each repo plays and where to start.