## Summary
Cloudflare AI Gateway supports custom headers like `cf-aig-metadata` (request tagging / correlation), `cf-aig-cache-key` (cache key hinting), and `cf-aig-cache-ttl` (per-request cache TTL). llm-providers accepts a `baseUrl` override per provider (which is how consumers currently route through AI Gateway), but has no first-class support for attaching Gateway-specific metadata per request.
## Motivation
AEGIS daemon runs on Cloudflare Workers and uses AI Gateway in production for:
- Per-tenant request tagging for cost attribution
- Cache key hinting for repeat dispatches
- Per-request cache TTL tuning
- Correlation IDs for tracing
Today the daemon has to either (a) attach the `cf-aig-*` headers manually to every request it builds, or (b) skip the metadata entirely and lose observability. Both add friction: (a) is repetitive, (b) is lossy.
First-class Gateway metadata support in llm-providers would make AI Gateway users (which is likely most consumers on Cloudflare) first-class citizens.
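For context, option (a) today looks roughly like the sketch below. The `cf-aig-metadata` header name is real AI Gateway behavior; `buildHeaders` is a hypothetical helper used for illustration, not part of llm-providers:

```ts
// Hypothetical helper a consumer writes today: merge Gateway tagging headers
// into the base headers for every single request it builds.
function buildHeaders(
  base: Record<string, string>,
  tenantId: string,
  correlationId: string,
): Record<string, string> {
  return {
    ...base,
    // cf-aig-metadata carries a JSON blob of custom key/value pairs
    "cf-aig-metadata": JSON.stringify({ tenant: tenantId, correlationId }),
  };
}
```

Repeating this at every call site is exactly the boilerplate the proposal below aims to remove.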
## Proposed API
Add an optional `gatewayMetadata` field to `LLMRequest`:
```ts
interface GatewayMetadata {
  requestId?: string; // cf-aig-metadata (or header-ified)
  cacheKey?: string; // cf-aig-cache-key
  cacheTtl?: number; // cf-aig-cache-ttl (seconds)
  customMetadata?: Record<string, string>; // cf-aig-metadata JSON blob
}

interface LLMRequest {
  // ... existing fields
  gatewayMetadata?: GatewayMetadata;
}
```
Providers forward these as `cf-aig-*` headers only when `baseUrl` matches the Cloudflare AI Gateway URL pattern (`https://gateway.ai.cloudflare.com/v1/*`). For non-Gateway `baseUrl` values, the field is ignored (no-op). This keeps the feature Cloudflare-specific without polluting provider call semantics when Gateway isn't in use.
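A minimal sketch of the forwarding rule just described, assuming the `GatewayMetadata` shape above; `gatewayHeaders` is a hypothetical internal helper, and the exact header-mapping details would be up to the implementation:

```ts
interface GatewayMetadata {
  requestId?: string;
  cacheKey?: string;
  cacheTtl?: number;
  customMetadata?: Record<string, string>;
}

// Matches https://gateway.ai.cloudflare.com/v1/<account>/<gateway>/...
const GATEWAY_URL_PATTERN = /^https:\/\/gateway\.ai\.cloudflare\.com\/v1\//;

// Returns the cf-aig-* headers a provider would attach for this request,
// or an empty object when baseUrl is not an AI Gateway URL (the no-op case).
function gatewayHeaders(
  baseUrl: string,
  meta?: GatewayMetadata,
): Record<string, string> {
  if (!meta || !GATEWAY_URL_PATTERN.test(baseUrl)) return {};
  const headers: Record<string, string> = {};
  if (meta.cacheKey) headers["cf-aig-cache-key"] = meta.cacheKey;
  if (meta.cacheTtl !== undefined) headers["cf-aig-cache-ttl"] = String(meta.cacheTtl);
  if (meta.requestId || meta.customMetadata) {
    // Fold requestId into the cf-aig-metadata JSON blob alongside custom keys.
    headers["cf-aig-metadata"] = JSON.stringify({
      ...(meta.requestId ? { requestId: meta.requestId } : {}),
      ...meta.customMetadata,
    });
  }
  return headers;
}
```

The URL check keeps the behavior opt-in by construction: a consumer pointing `baseUrl` directly at a provider API gets today's semantics unchanged.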
## Non-goals
- Not trying to cover every AI Gateway feature (e.g., BYOK, rate limiting, workflows) — just metadata that callers want to tag per-request
- Not prescribing how consumers use the metadata — just forwarding it faithfully
## Priority
LOW — nice-to-have observability and cost-attribution improvement. Not a migration blocker. File and triage when there's bandwidth.
## Related
🤖 Filed by AEGIS during Phase D scoping session