1 change: 1 addition & 0 deletions docs.json
@@ -1090,6 +1090,7 @@
"inference/response-settings/json-mode",
"inference/response-settings/reasoning",
"inference/response-settings/streaming",
"inference/response-settings/prefix-caching",
"inference/response-settings/structured-output",
"inference/response-settings/tool-calling"
]
96 changes: 96 additions & 0 deletions inference/response-settings/prefix-caching.mdx
@@ -0,0 +1,96 @@
---
title: "Prefix Caching"
description: "Reduce latency for repeated prompts with prefix caching, and isolate cache reuse with cache_salt when needed."
---

W&B Inference uses prefix caching on supported hosted models to speed up repeated requests with identical prompt prefixes.

When a request shares the same prompt prefix as an earlier request on the same backend, the model can reuse previously computed KV cache for that prefix instead of recomputing it from scratch. This can reduce latency for repeated prompts, long system prompts, and workloads with a stable shared prefix.

Prefix caching is automatic on supported models. You do not need to enable it in your request.

## When prefix caching helps

Prefix caching is most useful when you repeatedly send requests that share a long common prefix, such as:

- A large system prompt reused across many requests.
- A long shared document followed by different user questions.
- Repeated evaluation prompts with only small per-request changes.
- Multi-turn workloads where much of the conversation history stays the same.
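To benefit from this, structure prompts so the long, stable content comes first and the per-request variation comes last. The sketch below uses a hypothetical `build_messages` helper (not part of any SDK) to illustrate the idea:

```python
def build_messages(shared_document: str, user_question: str) -> list[dict]:
    """Keep the long, stable content first so repeated requests share a prefix."""
    return [
        {
            "role": "system",
            "content": "You are a careful assistant that answers concisely.",
        },
        {
            "role": "user",
            "content": f"Document:\n{shared_document}\n\nQuestion: {user_question}",
        },
    ]

# Two requests over the same document are identical up to the question,
# so the backend can reuse the KV cache for the shared portion.
a = build_messages("<long shared document>", "What is the main finding?")
b = build_messages("<long shared document>", "Who are the authors?")
```

Putting the question after the document, rather than before it, keeps the shared prefix as long as possible.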

## Cache isolation

By default, identical prompt prefixes may reuse cache on shared infrastructure when the backend allows it.

If you want to isolate cache reuse to a specific trust boundary, set the `cache_salt` request parameter. Requests only reuse prefix cache when both the prompt prefix and the `cache_salt` match.

Use `cache_salt` when you want cache reuse within a single user, tenant, session, or application boundary, but do not want reuse across other callers.
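Any stable, non-empty string works as a salt. One way to derive it, sketched below with a hypothetical `cache_salt_for` helper, is to hash the identifiers that define your trust boundary so raw IDs stay out of the request while the salt remains deterministic:

```python
import hashlib

def cache_salt_for(tenant_id: str, user_id: str) -> str:
    """Derive a stable, non-empty cache_salt per (tenant, user) boundary.

    Hypothetical helper: hashing is one option, not a requirement --
    any stable non-empty string gives the same isolation.
    """
    return hashlib.sha256(f"{tenant_id}:{user_id}".encode()).hexdigest()

# Same boundary -> same salt -> cache can be reused within it.
# Different tenant -> different salt -> no reuse across tenants.
print(cache_salt_for("tenant-a", "user-123"))
```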

### How it works

- Same prompt prefix, no `cache_salt`: cache may be reused across matching requests.
- Same prompt prefix, same `cache_salt`: cache can be reused.
- Same prompt prefix, different `cache_salt`: cache is isolated and will not be reused across salts.

<Note>
`cache_salt` must be a non-empty string when provided.
</Note>

## Examples

<Tabs>
<Tab title="Python">
```python
import openai

client = openai.OpenAI(
    base_url="https://api.inference.wandb.ai/v1",
    api_key="<your-api-key>",
)

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2.5",
    messages=[
        {
            "role": "system",
            "content": "You are a careful assistant that answers concisely."
        },
        {
            "role": "user",
            "content": "Summarize this document in one sentence: <long shared prefix here>"
        },
    ],
    # The OpenAI SDK does not accept `cache_salt` as a direct keyword
    # argument, so pass it through `extra_body`.
    extra_body={
        "cache_salt": "tenant-a-user-123-secret",
    },
)

print(response.choices[0].message.content)
```
</Tab>

<Tab title="Bash">
```bash
curl https://api.inference.wandb.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-key>" \
  -d '{
    "model": "moonshotai/Kimi-K2.5",
    "messages": [
      { "role": "system", "content": "You are a careful assistant that answers concisely." },
      { "role": "user", "content": "Summarize this document in one sentence: <long shared prefix here>" }
    ],
    "cache_salt": "tenant-a-user-123-secret"
  }'
```
</Tab>
</Tabs>

## Response behavior

On some models, usage details may include cached token counts in `usage.prompt_tokens_details.cached_tokens` when prefix cache is reused. Availability of that field may vary by model and backend.
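Because the field is not guaranteed to be present, read it defensively. A minimal sketch, using a hypothetical `cached_token_count` helper and a stand-in object in place of a real SDK response:

```python
from types import SimpleNamespace

def cached_token_count(response) -> int:
    """Return usage.prompt_tokens_details.cached_tokens when reported, else 0.

    Defensive lookup, since availability varies by model and backend.
    """
    usage = getattr(response, "usage", None)
    details = getattr(usage, "prompt_tokens_details", None)
    return getattr(details, "cached_tokens", None) or 0

# Illustrative stand-in for an SDK response object:
demo = SimpleNamespace(
    usage=SimpleNamespace(prompt_tokens_details=SimpleNamespace(cached_tokens=128))
)
print(cached_token_count(demo))  # 128
```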

## Related pages

- [Chat Completions](/inference/api-reference/chat-completions)
- [Enable streaming responses](/inference/response-settings/streaming)
- [Structured output](/inference/response-settings/structured-output)
- [JSON mode](/inference/response-settings/json-mode)