Fork Notice
This project is a fork of oleiade/xk6-kv, extended with additional atomic primitives and random-key utilities:
- Atomic operations: `incrementBy`, `getOrSet`, `swap`, `compareAndSwap`, `deleteIfExists`, `compareAndDelete`.
- Random keys: `randomKey()` plus batch `randomKeys()` for key-only sampling workflows.
- Optional key tracking for faster random key sampling (disk & memory backends).
- Disk backend path overrides for custom artifact persistence.
All code is licensed under GNU AGPL v3.0.
A k6 extension providing a persistent key-value store to share state across Virtual Users (VUs). Supports both in-memory and persistent bbolt backends, deterministic list() ordering, and high-level atomic helpers designed for safe concurrent access in load tests.
- 🔒 Thread-Safe: Secure state sharing across Virtual Users
- 🔄 Flexible Storage: In-memory (ephemeral) or disk-based (persistent) backends
- 📊 Atomic Operations: Increment, compare-and-swap, and more
- 🎲 Random Key Selection: Uniform sampling with optional prefix filtering
- 🔍 Key Tracking: Optional faster random key access via in-memory indexing
- 🏷️ Prefix Support: Filter operations by key prefixes
- 📦 Stable bbolt Bucket: Disk backend always uses the `k6` bucket (an upstream bug tied the bucket name to the DB path and could orphan data; now fixed)
- 🧭 Cursor Scanning: Stream large datasets via `scan()`/`scanKeys()` with continuation tokens
- 📝 Serialization: JSON or string serialization
- 💾 Snapshots: Export/import bbolt files for backups and data seeding
- 📘 TypeScript Support: Full type declarations for IntelliSense and type safety
- ⚡ High Performance: Optimized for concurrent workloads
- 🔀 Automatic Sharding: Memory backend automatically shards data across CPU cores for 2-3.5x performance gains on multi-core systems
- 🧰 bbolt tuning knobs: Exposes timeout, noSync/noGrowSync/noFreelistSync, freelist type, preLoadFreelist, initial mmap size, and mlock so advanced users can dial back fsync overhead when durability trade-offs are acceptable
Download k6 binaries with xk6-kv from the Releases page.
Release artifacts are named xk6-kv_<version>_<os>_<arch>.tar.gz for Linux/macOS and
xk6-kv_<version>_<os>_<arch>.zip for Windows. Set VERSION to the release you want
to install, for example v1.3.6.
Linux:
VERSION=v1.3.6
curl -L "https://github.com/oshokin/xk6-kv/releases/download/${VERSION}/xk6-kv_${VERSION}_linux_amd64.tar.gz" -o k6.tar.gz
tar -xzf k6.tar.gz && chmod +x k6
./k6 version

macOS:
VERSION=v1.3.6
curl -L "https://github.com/oshokin/xk6-kv/releases/download/${VERSION}/xk6-kv_${VERSION}_darwin_arm64.tar.gz" -o k6.tar.gz
tar -xzf k6.tar.gz && chmod +x k6
./k6 version

Windows (PowerShell):
$VERSION = "v1.3.6"
Invoke-WebRequest -Uri "https://github.com/oshokin/xk6-kv/releases/download/$VERSION/xk6-kv_${VERSION}_windows_amd64.zip" -OutFile k6.zip
Expand-Archive -Path k6.zip -DestinationPath .
.\k6.exe version

- Install xk6:
go install go.k6.io/xk6/cmd/xk6@latest

- Build k6 with xk6-kv:
# Latest version
xk6 build --with github.com/oshokin/xk6-kv@latest
# Specific version
xk6 build --with github.com/oshokin/xk6-kv@v1.3.6

- Verify:
./k6 version

Requirements: Go 1.25 or higher.
This release targets k6 v1.7.x and the go.k6.io/k6 v1 Go module path.
k6 v2 compatibility is intentionally tracked separately because k6 v2 changed the Go module path to go.k6.io/k6/v2 and is still a release candidate.
import { openKv } from "k6/x/kv";
const kv = openKv(); // Default: disk backend
export async function setup() {
await kv.clear();
}
export default async function () {
await kv.set("foo", "bar");
await kv.set("user:1", { name: "Alice" });
const key = await kv.randomKey({ prefix: "user:" });
if (key) {
console.log(`Random user: ${key}`);
}
const entries = await kv.list({ prefix: "user:" });
console.log(`Found ${entries.length} users`);
}
export async function teardown() {
kv.close();
}

Async note: every `kv.*` helper returns a Promise. In k6 scripts, mark your `setup`, `default`, or helper functions as `async` (or use `.then`) and `await` each call; otherwise errors are swallowed and the operation may never finish.

Concurrency note: avoid unbounded `Promise.all()` with very large KV batches (for example, tens of thousands of `kv.set()` calls at once). Prefer `setMany()` for object-map writes, sequential `await`, or bounded concurrency.
Bounded concurrency helper:
async function mapLimit(items, limit, fn) {
const width = Math.max(1, Math.min(limit, items.length || 1));
const workers = Array.from({ length: width }, async (_, worker) => {
for (let i = worker; i < items.length; i += width) {
await fn(items[i], i);
}
});
await Promise.all(workers);
}

Every rejected `kv.*` promise carries a typed error object with `name` and `message` fields. It is a structured plain object, not a JavaScript Error instance, so do not rely on `err instanceof Error`. Check `err.name` (or `err.Name` when k6 serialises it as a Go struct) instead of matching raw strings.
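For instance, the bounded-concurrency `mapLimit` helper above keeps a fixed number of `kv.set()` calls in flight. In this sketch, the key names, the batch width of 16, and the `seedUsers` wrapper are illustrative, and `mapLimit` is repeated so the snippet stands alone:

```javascript
// mapLimit repeated from above so this snippet is self-contained.
async function mapLimit(items, limit, fn) {
  const width = Math.max(1, Math.min(limit, items.length || 1));
  const workers = Array.from({ length: width }, async (_, worker) => {
    for (let i = worker; i < items.length; i += width) {
      await fn(items[i], i);
    }
  });
  await Promise.all(workers);
}

// Seed n users with at most 16 concurrent kv.set() calls.
async function seedUsers(kv, n) {
  const ids = Array.from({ length: n }, (_, i) => `user:${i}`);
  await mapLimit(ids, 16, (id) => kv.set(id, { id }));
}
```

Unlike a bare `Promise.all(ids.map(...))`, this never holds more than 16 pending KV operations at once.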
Batch APIs such as setMany() may also include a stable err.errors array. Each item in err.errors has this shape:
{
key?: string
name: string
message: string
}

General error handling example:
try {
await kv.backup({ fileName: "./snapshots/run.kv" });
} catch (err) {
if (err?.name === "BackupInProgressError") {
console.log("Another VU is already writing a snapshot—safe to ignore.");
} else if (err?.name === "SnapshotPermissionError") {
fail(`Backup path is not writable: ${err.message}`);
} else {
throw err;
}
}

Batch error details example:
try {
await kv.setMany({
"ok": { name: "Alice" },
"bad": () => {}, // JSON serializer rejects functions
});
} catch (err) {
if (err?.name === "InvalidOptionsError") {
for (const item of err.errors ?? []) {
console.log(item.key, item.name, item.message);
}
} else {
throw err;
}
}

High-level categories:

- Options & inputs: typed guards such as `BackupOptionsRequiredError`, `ValueNumberRequiredError`, `UnsupportedValueTypeError`.
- Concurrency & lifecycle: e.g. `BackupInProgressError`, `RestoreInProgressError`, `StoreReadOnlyError`, `StoreClosedError`.
- Disk & snapshot IO: precise signals for path issues, permission problems, bbolt failures, or restore budget overruns.
📚 A complete catalogue with root causes and remediation tips lives in examples/README.md.
Full TypeScript support with IntelliSense and type safety! Copy the typescript/ folder to your project for a ready-to-use starter kit.
See typescript/README.md for complete setup instructions.
typescript/ is a local starter example project (template), not a published npm package.
Opens a key-value store. Must be called in the init context (outside of default/setup/teardown functions). The returned KV handle is shared across VUs, so treat its lifecycle as test-wide.
⚠️ Configuration lock-in: the first successful `openKv()` call fixes the store configuration for the whole test run. All subsequent `openKv()` calls must provide equivalent options; otherwise `openKv()` throws `KVOptionsConflictError`. Calling `close()` does not reset this process-wide configuration.
interface OpenKvOptions {
backend?: "memory" | "disk" // default: "disk"
path?: string // default: "./.k6.kv" (disk only)
serialization?: "json" | "string" // default: "json"
trackKeys?: boolean // default: false
memory?: {
shardCount?: number // default: 0 (auto-detect, memory only)
// when omitted: defaults are applied
}
disk?: {
timeout?: number | string // wait for file lock; number=ms, string=Go duration (e.g. "1s"); default 1s
noSync?: boolean // disable fsync on commit; default false
noGrowSync?: boolean // skip fsync on growth; default false
noFreelistSync?: boolean // rebuild freelist on open; default false
preLoadFreelist?: boolean // load freelist into memory; default false
freelistType?: "" | "array" | "map" // freelist representation; default "array"
readOnly?: boolean // open DB read-only; mutating APIs reject with StoreReadOnlyError (requires pre-existing DB/bucket); default false
initialMmapSize?: number | string // initial mmap size; number=bytes, string supports SI ("MB") and IEC ("MiB"); 0 keeps default/no preallocation (default)
mlock?: boolean // mlock pages (UNIX); default false
// when omitted: bbolt defaults are applied
}
metrics?: {
operations?: boolean // default: false
}
}

Options:

- `backend`: `"memory"` (ephemeral, fastest) or `"disk"` (persistent bbolt)
- `serialization`: `"json"` (structured) or `"string"` (string/raw bytes)
- `trackKeys`: Enable in-memory key indexing for faster `randomKey()`/`randomKeys()` selection (see Performance & Complexity)
- `path`: (Disk only) Override bbolt file location
- `memory.shardCount`: (Memory only) Number of shards for concurrent performance. If `<= 0` or omitted, defaults to `runtime.NumCPU()` (automatic, recommended). If `> 65536`, automatically capped at 65536. Ignored by the disk backend. When `memory` is omitted, defaults are applied.
- `disk`: (Disk only) Optional bbolt tuning. When `disk` is omitted, bbolt defaults apply (1s lock timeout, syncs enabled, array freelist, etc.).
- `disk.readOnly`: Requires the bbolt file (and `k6` bucket) to already exist; opening in read-only mode cannot create the bucket and will fail if the file is missing or empty.
- `metrics.operations`: Enables automatic per-method metrics (`xk6_kv_operations_total`, `xk6_kv_operation_duration`, `xk6_kv_operation_failed`, `xk6_kv_errors_total`, `xk6_kv_empty_result`).
Note: With serialization: "string", string values are stored as-is. Non-string values are converted with Go fmt %v formatting (for example, an object can become map[a:1]). This mode is not JSON and is not intended for structured value round-trips; use serialization: "json" for objects/arrays.
const kv = openKv({ serialization: 'string' });
await kv.set('x', { a: 1 });
console.log(await kv.get('x')); // Go-style string, not { a: 1 }
⚠️ Snapshot path sharing: if you omit `backup().fileName` or `restore().fileName`, the memory backend deliberately falls back to the same `.k6.kv` file the disk backend uses. This lets you run ultra-fast tests with `backend: "memory"` and then immediately replay the generated dataset via `backend: "disk"` without touching paths. If you don't want that coupling (for example, you run disk workloads concurrently), always pass an explicit `fileName`.
Memory Backend Sharding:
The memory backend shards data across multiple internal partitions to improve concurrent performance by reducing lock contention:
- Automatic (recommended): Set `memory.shardCount: 0` (or omit) to auto-detect based on CPU count (e.g., 32 shards on a 32-core system)
- Manual: Set `memory.shardCount` to a specific value (1-65536) for fine-tuned control
- Performance: On high-core systems sharding delivers:
  - 3.5x faster `set()` operations
  - 2x faster `get()` operations
- How it works: Keys are distributed across shards using a hash function, allowing concurrent operations on different shards to proceed in parallel
- Maximum: Shard count is capped at 65536 (2^16) to provide excellent hash distribution while keeping memory overhead minimal (~5MB for empty shard structures)
- Memory-only: Sharding applies only to the `"memory"` backend; the disk backend uses bbolt's transaction-based concurrency
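Conceptually, shard selection is just a key hash modulo the shard count, so writes to unrelated keys land on independent locks. The sketch below illustrates the idea only; it is not the extension's actual hash function:

```javascript
// Illustrative only: maps a key to one of shardCount partitions using
// FNV-1a. Two keys on different shards can be mutated in parallel.
function shardFor(key, shardCount) {
  let h = 2166136261; // FNV-1a offset basis
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 16777619); // FNV-1a prime
  }
  return (h >>> 0) % shardCount;
}
```

The mapping is deterministic, so the same key always hits the same shard, which is what makes per-shard locking safe.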
All methods return Promises except close().
- `get(key: string): Promise<any>` - Retrieves a value by key. Throws if the key doesn't exist.
- `getMany(keys: string[]): Promise<Array<{ key: string, exists: boolean, value: any | null }>>` - Reads many keys in one logical batch and preserves input order. Missing keys return `{ exists: false, value: null }`; stored JSON `null` returns `{ exists: true, value: null }`; duplicate keys are allowed. Empty-string keys are accepted for reads and resolve like missing keys in normal public API usage. `getMany()` backend note: disk reads run inside one bbolt read transaction; memory reads use per-shard locks and do not provide cross-key snapshot isolation under concurrent writes. Mutating APIs (`set`, `setMany`, `delete`, `deleteMany`) reject empty-string keys.
- `set(key: string, value: any): Promise<any>` - Sets a key-value pair. Empty-string keys are rejected.
- `setMany(entries: Record<string, any>): Promise<{ written: number }>` - Writes an object map in one logical batch. Keys must be non-empty strings. Validates the input shape and serializes all values before writing; rejects with `err.errors` and writes nothing if any entry fails. `setMany()` provides all-or-nothing validation/serialization semantics, but is not intended to provide cross-key snapshot isolation for concurrent readers on the memory backend.
- `deleteMany(keys: string[]): Promise<{ deleted: number, missing: number }>` - Deletes an explicit list of non-empty keys. Missing keys are not errors and are counted in `missing`; duplicate keys are processed in input order. Rejects invalid input before deleting anything.
- `deleteByPrefix(options: { prefix: string, limit: number }): Promise<{ deleted: number, done: boolean }>` - Deletes up to `limit` keys matching a non-empty prefix. This operation is destructive and bounded by the explicit limit.
- `delete(key: string): Promise<boolean>` - Removes a key-value pair (always resolves to `true`).
- `exists(key: string): Promise<boolean>` - Checks if a key exists.
- `clear(): Promise<boolean>` - Removes all entries (always resolves to `true`). `clear()` is destructive and best used in setup/teardown or maintenance windows. Under concurrent writers, new keys can appear immediately after the wipe completes. With `trackKeys: true`, in-memory indexes are reset together with store state.
- `size(): Promise<number>` - Returns the current store size (number of keys).
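Because `getMany()` preserves input order and flags missing keys, collapsing its result into a map of only the present values is a small helper (a sketch; the `presentValues` name is illustrative):

```javascript
// Collect only the entries that exist into a Map keyed by key name.
// Note: a stored JSON null still counts as present (exists === true).
function presentValues(items) {
  const out = new Map();
  for (const item of items) {
    if (item.exists) {
      out.set(item.key, item.value);
    }
  }
  return out;
}
```

Typical use: `const users = presentValues(await kv.getMany(keys));`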
getMany() example:
const items = await kv.getMany(["user:1", "user:missing", "user:null"]);
// items[0] -> { key: "user:1", exists: true, value: ... }
// items[1] -> { key: "user:missing", exists: false, value: null }
// items[2] -> { key: "user:null", exists: true, value: null }

deleteMany() example:
const result = await kv.deleteMany(["user:1", "user:2", "user:missing"]);
// result -> { deleted: 2, missing: 1 }

deleteByPrefix() example:
const result = await kv.deleteByPrefix({
prefix: "tmp:",
limit: 1000,
});
// result -> { deleted: <number>, done: <boolean> }

deleteByPrefix() details:

- `prefix` is required and must be a non-empty string.
- `limit` is required, must be a positive integer, and is capped at `100000`.
- `done === true` means no matching keys remain after this call.
- If `done === false`, repeat the same call until completion.
- Use `listKeys({ prefix, limit })` first for a read-only preview.
Backend note:

- the disk backend deletes inside one bbolt write transaction;
- the memory backend deletes through shard locks and does not provide cross-key transactional visibility under concurrent writes;
- claim metadata is removed for physically deleted keys.
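Since each call removes at most `limit` keys, a full cleanup is a loop until `done` (a sketch; `drainPrefix` is an illustrative wrapper around an open `kv` handle):

```javascript
// Repeatedly delete keys under a prefix in bounded batches until none remain.
async function drainPrefix(kv, prefix, limit = 1000) {
  let total = 0;
  let done = false;
  while (!done) {
    const res = await kv.deleteByPrefix({ prefix, limit });
    total += res.deleted;
    done = res.done;
  }
  return total; // number of keys deleted across all batches
}
```

Keeping `limit` bounded keeps each write transaction short, so concurrent VUs are never blocked for the whole cleanup.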
- `incrementBy(key: string, delta: number): Promise<number>` - Atomically increments a numeric value. Treats missing keys as `0`. JS numbers are IEEE-754 doubles, so anything above `Number.MAX_SAFE_INTEGER` (~9e15) loses precision before the value reaches Go. Keep counters below that threshold or encode larger values as strings/custom objects in your script before calling `incrementBy`.
- `getOrSet(key: string, value: any): Promise<{ value: any, loaded: boolean }>` - Gets the existing value or sets it if absent. `loaded: true` means pre-existing.
- `swap(key: string, value: any): Promise<{ previous: any|null, loaded: boolean }>` - Replaces a value atomically. Returns the previous value if it existed.
- `compareAndSwap(key: string, oldValue: any, newValue: any): Promise<boolean>` - Sets `newValue` only if the current value equals `oldValue`. Pass `null`/`undefined` as `oldValue` to mean "only if the key is absent" (set-if-not-exists).
- `compareAndSwapDetailed(key: string, oldValue: any, newValue: any, options?: { includeCurrentOnMismatch?: boolean }): Promise<{ swapped: true, reason: "swapped" } | { swapped: false, reason: "mismatch", existed: boolean, current?: any }>` - Detailed CAS diagnostics. On mismatch, includes `existed` and optionally `current` (only when `includeCurrentOnMismatch: true` and the key existed).
- `setIfAbsent(key: string, value: any): Promise<boolean>` - Convenience API for first-writer-wins key initialization. Equivalent to `compareAndSwap(key, null, value)`.
- `deleteIfExists(key: string): Promise<boolean>` - Deletes the key if it exists. Returns `true` if deleted.
- `compareAndDelete(key: string, oldValue: any): Promise<boolean>` - Deletes the key only if the current value equals `oldValue`.
- `compareAndDeleteDetailed(key: string, oldValue: any, options?: { includeCurrentOnMismatch?: boolean }): Promise<{ deleted: true, reason: "deleted" } | { deleted: false, reason: "mismatch", existed: boolean, current?: any }>` - Detailed compare-and-delete diagnostics. `oldValue: null/undefined` is treated as a regular expected-value comparison through the configured serializer (not an absent-key sentinel).

`compareAndSwap()` and `compareAndDelete()` behavior and return types are unchanged. `compareAndSwapDetailed()`/`compareAndDeleteDetailed()` are additive opt-in APIs for richer mismatch diagnostics. `setIfAbsent()` is a convenience API over absent-key CAS semantics; existing APIs are not redefined.
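These primitives compose into the classic optimistic read-modify-write loop: read the current value, compute the next one, and retry if `compareAndSwap()` reports interference. A sketch follows; `updateWithRetry` and `maxAttempts` are illustrative names, and the absent-key case relies on the `null` sentinel described above:

```javascript
// Optimistically update a value under concurrent writers.
// mutate receives the current value (or null when the key is absent).
async function updateWithRetry(kv, key, mutate, maxAttempts = 10) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const exists = await kv.exists(key);
    const current = exists ? await kv.get(key) : null;
    const next = mutate(current);
    // null `current` means set-if-absent; otherwise a regular CAS.
    if (await kv.compareAndSwap(key, current, next)) {
      return next;
    }
    // Another VU changed the value between get() and CAS: retry.
  }
  throw new Error(`CAS did not converge for ${key}`);
}
```

For plain numeric counters, prefer `incrementBy()`; this loop is for structured values that need arbitrary transformations.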
- `scan(options?: ScanOptions): Promise<ScanResult>` - Streams entries in lexicographic order using cursor-based pagination.

  interface ScanOptions {
    prefix?: string // Filter by key prefix
    limit?: number  // Max results per page; positive values are capped at 100000
    cursor?: string // Opaque cursor produced by the previous page ("" starts a new scan)
  }

  interface ScanResult {
    entries: Array<{ key: string; value: any }>;
    cursor: string; // Opaque cursor for the next page
    done: boolean;  // True when the scan reached the end of the prefix window
  }

  Use `scan()` with a bounded positive `limit` when the keyspace is too large to materialize with `list()` or when you need restart-safe pagination. Treat `cursor` as an opaque continuation token. Do not parse, modify, or construct it manually. Use a cursor only with the same logical scan options that produced it, especially the same `prefix`. Pagination is cursor-based, but it is not a long-lived snapshot. If keys are inserted or deleted between page calls, later pages may reflect those changes. On the disk backend, each `scan()` call is a separate bbolt read transaction. On the memory backend, concurrent writes can also affect what you observe during scan/list iteration.
- `scanKeys(options?: ScanKeysOptions): Promise<ScanKeysResult>` - Streams key names in lexicographic order using cursor-based pagination.

  interface ScanKeysOptions {
    prefix?: string // Filter by key prefix
    limit?: number  // Max keys per page; positive values are capped at 100000
    cursor?: string // Opaque cursor produced by the previous page ("" starts a new scan)
  }

  interface ScanKeysResult {
    keys: string[];
    cursor: string; // Opaque cursor for the next page
    done: boolean;  // True when the scan reached the end of the prefix window
  }

  `scanKeys()` is the key-only equivalent of `scan()`. It does not clone, deserialize, or return values. Prefer `scanKeys()` with a bounded positive `limit` for large stores or load-test paths. Avoid `limit <= 0` on large keyspaces unless you intentionally want to materialize every matching key in one VU. Treat `cursor` as an opaque continuation token. Do not parse, modify, or construct it manually. Use a cursor only with the same logical scan options that produced it, especially the same `prefix`. Pagination is cursor-based, but it is not a long-lived snapshot. If keys are inserted or deleted between page calls, later pages may reflect those changes. For exclusive allocation workflows, use `claimRandom()` or `popRandom()` instead of scan/list pagination.

  let cursor = "";
  let done = false;
  while (!done) {
    const page = await kv.scanKeys({ prefix: "user:", cursor, limit: 1000 });
    const users = await kv.getMany(page.keys);
    cursor = page.cursor;
    done = page.done;
  }
- `list(options?: ListOptions): Promise<Array<{ key: string; value: any }>>` - Returns entries sorted lexicographically by key.

  interface ListOptions {
    prefix?: string // Filter by key prefix
    limit?: number  // Max results; positive values are capped at 250000
  }

- `listKeys(options?: ListKeysOptions): Promise<string[]>` - Lists key names without returning values.

  interface ListKeysOptions {
    prefix?: string // Optional key prefix filter
    limit?: number  // Max keys; positive values are capped at 250000
  }

  `listKeys()` is read-only, returns keys in ascending lexicographic order, and does not clone, deserialize, or return values. Use it for small datasets, debugging, setup validation, and bounded previews. For large stores, prefer `scanKeys({ limit })` so each VU only materializes one page at a time. Useful flow before destructive calls:

  const keys = await kv.listKeys({ prefix: "tmp:", limit: 1000 });
  await kv.deleteMany(keys);
Backend note:

- the disk backend reads key names from bbolt cursors, so results reflect the durable store;
- the memory backend uses key-only shard iterators and merges them lexicographically;
- `listKeys()` is not cursor-paginated; use `scanKeys()` for key-only cursor pagination;
- use `scan()` when you need key/value entries.
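Putting the cursor rules together, a bounded full iteration over a prefix is a loop over pages (a sketch; `scanAll` is an illustrative helper around an open `kv` handle, and the page size is arbitrary):

```javascript
// Collect every entry under a prefix, one bounded page at a time.
// Each page is a separate call, so no single read holds the store for long.
async function scanAll(kv, prefix, pageSize = 1000) {
  const entries = [];
  let cursor = "";
  let done = false;
  while (!done) {
    const page = await kv.scan({ prefix, cursor, limit: pageSize });
    entries.push(...page.entries);
    cursor = page.cursor; // opaque token: pass it back unchanged
    done = page.done;
  }
  return entries;
}
```

Remember this is not a snapshot: keys written or deleted between pages may or may not appear in later pages.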
- `count(options?: CountOptions): Promise<number>` - Returns the number of keys matching a prefix. `count()` with no prefix (or omitted options) is equivalent to `size()`.

  interface CountOptions {
    prefix?: string // Filter by key prefix
  }
- `randomKey(options?: RandomKeyOptions): Promise<string>` - Returns a random key, or `""` if none match.

  interface RandomKeyOptions {
    prefix?: string // Optional prefix filter
  }

  `randomKey()` only returns a key string and does not create/observe leases. No-match is not an error, but the promise may reject for invalid options, closed stores, backend I/O errors, or other technical failures.

- `randomKeys(options: RandomKeysOptions): Promise<string[]>` - Returns random key names matching an optional prefix.

  interface RandomKeysOptions {
    prefix?: string  // Optional prefix filter
    count: number    // Required integer in range [1, 1000000]
    unique?: boolean // Defaults to true
  }

  const keys = await kv.randomKeys({ prefix: "user:", count: 100, unique: true });
  const users = await kv.getMany(keys);

  `randomKeys()` returns keys only. It does not clone, deserialize, or return values. `count` is capped at `1000000` to protect the k6 process from unbounded allocations. When `unique` is `true` and fewer matching keys exist than requested, all available matching keys are returned in random order. Use `claimRandom()` or `popRandom()` when you need exclusive allocation.
- `popRandom(options?: { prefix?: string }): Promise<{ key: string, value: any } | null>` - Atomically picks and removes one random matching entry. Resolves to `null` when no match exists.

- `claimRandom(options?: { prefix?: string, owner?: string, ttl?: number }): Promise<{ id: string, key: string, token: number, owner?: string, expiresAt: number, entry: { key: string, value: any } } | null>` - Atomically leases one random matching entry. Live claims are excluded from later `claimRandom()` and `popRandom()` calls until released/completed or expired. If `ttl` is omitted, the default lease is 30000 ms (30 seconds). `ttl` must be a positive integer and is capped at 86400000 ms (24 hours). `owner` is optional diagnostic metadata capped at 256 bytes and is not emitted as a metrics label.

- `releaseClaim(claim: { id: string, key: string, token: number }): Promise<boolean>` - Releases a live claim back to the pool. Returns `false` for stale/expired/missing claims.

- `completeClaim(claim, options?: { deleteKey?: boolean }): Promise<boolean>` - Completes a live claim. By default it also deletes the underlying key (`deleteKey: true`).

Claim lifecycle guidance:

- Use `completeClaim()` on success (`deleteKey: true` consumes the item permanently).
- Use `releaseClaim()` on failure to return the item to the pool.

  const claim = await kv.claimRandom({ prefix: "user:" });
  if (claim === null) return;
  try {
    // do work with claim.entry.value
    await kv.completeClaim(claim); // success path
  } catch (err) {
    await kv.releaseClaim(claim); // failure path
    throw err;
  }

⚠️ `claimRandom()` is a local coordination primitive for VUs sharing the same xk6-kv process/store. It is not a distributed lock service. `claimRandom()` and `popRandom()` are random allocation helpers optimized for low/moderate live-claim occupancy. Under very high live-claim occupancy, fallback selection can be biased toward scan order, but exclusivity is still preserved. `claim.token` is exposed as a JavaScript number. It should not approach `Number.MAX_SAFE_INTEGER` in practical k6 runs; if that ever becomes realistic, a future major API should expose it as a string.
- `rebuildKeyList(): Promise<boolean>` - Rebuilds in-memory key indexes (useful for the disk backend with `trackKeys: true`).

- `stats(): Promise<KVStats>` - Returns a structured diagnostic snapshot of the current store state.

  interface KVStats {
    backend: "memory" | "disk"
    serialization: "json" | "string"
    trackKeys: boolean
    count: number
    claims: { live: number; expired: number }
    index?: {
      enabled: boolean
      keysList?: number
      keysMap?: number
      ost?: number
      consistent: boolean
    } | null
    disk?: {
      path: string
      sizeBytes: number
      readOnly?: boolean
    } | null
  }

  const snapshot = await kv.stats();
  console.log(snapshot.count, snapshot.claims.live);
- `reportStats(): Promise<void>` - Emits state gauges to k6 custom metrics using the current snapshot.

  Emitted metrics:

  - `xk6_kv_keys`
  - `xk6_kv_claims_live`
  - `xk6_kv_claims_expired`
  - `xk6_kv_index_keys` (with `index=keys_list|keys_map|ost`)
  - `xk6_kv_index_consistent`
  - `xk6_kv_disk_size_bytes` (disk backend only)

  await kv.reportStats();
- `metrics.operations` (openKv option) - Enables automatic operation metrics for every async KV method except the sync `close()`.

  Emitted metrics:

  - `xk6_kv_operations_total` (Counter, tags: `op`, `backend`, `status`, `track_keys`, `serialization`)
  - `xk6_kv_operation_duration` (Trend in milliseconds, tags: `op`, `backend`, `status`, `track_keys`, `serialization`)
  - `xk6_kv_operation_failed` (Rate, tags: `op`, `backend`, `track_keys`, `serialization`)
  - `xk6_kv_errors_total` (Counter, tags: `op`, `backend`, `error_type`, `track_keys`, `serialization`)
  - `xk6_kv_empty_result` (Rate for `random_key`/`random_keys`/`pop_random`/`claim_random`, tags: `op`, `backend`, `track_keys`, `serialization`)
  - `xk6_kv_async_in_flight` (Gauge for async store operations currently running, tags: `backend`, `track_keys`, `serialization`)

  `xk6_kv_async_in_flight` is the current saturation signal for background store operations in the async bridge. It is decremented when the store goroutine queues its event-loop completion callback, so it is not a count of unresolved JavaScript promises. There is no waiting metric because xk6-kv does not currently have an async limiter or queue; if one is added later, a low-cardinality waiting gauge can be added alongside it.

  const kv = openKv({
    backend: "memory",
    metrics: { operations: true },
  });
Quickstart:

import { openKv } from "k6/x/kv";

const kv = openKv({
  backend: "memory",
  trackKeys: true,
  metrics: { operations: true },
});

export const options = {
  thresholds: {
    "xk6_kv_operation_failed": ["rate==0"],
    "xk6_kv_empty_result{op:claim_random}": ["rate<0.05"],
    "xk6_kv_async_in_flight": ["value<1000"],
    "xk6_kv_index_consistent{track_keys:true}": ["value==1"],
  },
};

export default async function () {
  await kv.claimRandom({ prefix: "user:" });
  await kv.reportStats();
}
Troubleshooting: if k6 reports `no metric name "xk6_kv_..." found`, make sure:

- `openKv({ metrics: { operations: true } })` runs in init context, and
- you rebuilt your binary with this extension (`task build-k6`).
- `backup(options?: BackupOptions): Promise<BackupSummary>` - Writes the current dataset to a bbolt file. Always set `fileName` (leaving it blank points at the backend's live bbolt file) and use `allowConcurrentWrites: true` for a best-effort dump that releases writers sooner (the summary includes `bestEffort` + `warning` so you can alarm on it).

  Memory backend caution: when `allowConcurrentWrites` is left at the default `false`, the memory backend holds its global mutation gate for the entire duration of the backup (from key snapshot through streaming). On large datasets that can pause every writer/VU for minutes. Enable `allowConcurrentWrites: true` if you need the cluster to keep serving traffic during the export (accepting the best-effort snapshot), or schedule strict backups during quiet windows.

  Shared-file workflow: leaving `fileName` blank while running the memory backend is intentional; it writes into `.k6.kv`, the same file the disk backend mounts by default. That makes a common DX pattern possible: run the hot path with `backend: "memory"`, call `backup()` without arguments in `teardown()`, and later rerun the same test with `backend: "disk"` to replay the captured dataset. If you want snapshots to live somewhere else (or you run disk workloads in parallel), provide an explicit `fileName` so you don't clobber the shared DB.

  File replacement semantics: backup/export writes to a temp file in the destination directory, fsyncs, then renames into place and syncs the parent directory. On Unix-like filesystems, same-directory rename is atomic. On platforms where `os.Rename` is not guaranteed atomic, treat this as a best-effort crash-safety strategy; it still avoids intentionally writing partial data directly into the destination file.
restore(options?: RestoreOptions): Promise<RestoreSummary>
Replaces the dataset with a snapshot produced bybackup(). OptionalmaxEntries/maxBytesguards protect against oversized or corrupted inputs. -
exportJSONL(options: { fileName: string, prefix?: string, limit?: number }): Promise<{ exported: number, fileName: string, bytesWritten: number }>
Exports key/value entries to a JSON Lines file for portable seed data and diff-friendly snapshots. Each line is:{"key":"...","value":...}. Values are exported after normal KV deserialization, not as raw backend bytes. -
importJSONL(options: { fileName: string, limit?: number, batchSize?: number }): Promise<{ imported: number, fileName: string, bytesRead: number }>
Imports key/value entries from a JSON Lines file. Each line must be:{"key":"...","value":...}.
await kv.backup({
fileName: "./backups/kv-latest.kv",
allowConcurrentWrites: true,
});
await kv.restore({ fileName: "./backups/kv-latest.kv" });

Disk backend note: pointing `fileName` at the currently mounted bbolt path is treated as a no-op (backup just returns metadata; restore leaves the DB untouched), so when you're already running `backend: "disk"` you still need a different `fileName`. The "shared `.k6.kv` trick" only applies when you begin on the memory backend and want to seed the disk backend later.
exportJSONL() example:
const result = await kv.exportJSONL({
fileName: "./exports/users.jsonl",
prefix: "user:",
});
console.log(result.exported);
console.log(result.bytesWritten);

exportJSONL() options:

- `fileName` is required and must be a non-empty string.
- `prefix` is optional; empty or omitted means all keys.
- `limit` is optional; if omitted or `<= 0`, all matching entries are exported. Positive values are capped at `1000000`.
exportJSONL() writes to a temporary file, flushes and fsyncs it, renames it into place, then syncs the parent directory. On Unix-like filesystems this gives atomic replacement in the common same-directory case. On platforms where os.Rename is not guaranteed atomic, this remains a best-effort crash-safety strategy and still avoids intentionally writing partial data directly into the target file.
importJSONL() example:
const result = await kv.importJSONL({
fileName: "./exports/users.jsonl",
batchSize: 1000,
});
console.log(result.imported);
console.log(result.bytesRead);

importJSONL() options:

- `fileName` is required and must be a non-empty string.
- `limit` is optional; if omitted or `<= 0`, all records are imported. Positive values are capped at `1000000`.
- `batchSize` is optional; if omitted or `<= 0`, the default batch size is used. Positive values are capped at `10000`.
- `importJSONL()` enforces a per-line safety cap of 64 MiB; oversized records are rejected with a line-numbered parse error.
importJSONL() streams records and writes them in setMany() batches. Existing keys are overwritten.
bytesRead reports how many input bytes were consumed by the importer.
Each line must be a JSON object with:
- `key` - required non-empty string.
- `value` - required JSON value, including `null`.
importJSONL() rejects blank lines and malformed JSON records. Errors include the source line number.
The import is batch-atomic, not file-atomic: if a later line is invalid, already committed batches remain imported, while the currently failed batch is not partially written. Failure messages include committed progress (records, bytes, and line context) so callers can locate the bad record and decide whether a retry is safe.
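Because each line is plain JSON, seed files can be produced or pre-validated with ordinary JavaScript before handing them to `importJSONL()`. The helpers below are an illustrative sketch that mirrors, but does not reuse, the importer's validation rules:

```javascript
// Build one JSONL record of the form {"key":"...","value":...}.
function toJSONLLine(key, value) {
  if (typeof key !== "string" || key.length === 0) {
    throw new Error("key must be a non-empty string");
  }
  return JSON.stringify({ key, value });
}

// Validate one JSONL line, reporting the line number like the importer does.
function parseJSONLLine(line, lineNumber) {
  let record;
  try {
    record = JSON.parse(line);
  } catch (err) {
    throw new Error(`line ${lineNumber}: malformed JSON`);
  }
  if (typeof record.key !== "string" || record.key.length === 0) {
    throw new Error(`line ${lineNumber}: "key" must be a non-empty string`);
  }
  if (!("value" in record)) {
    throw new Error(`line ${lineNumber}: missing "value"`);
  }
  return record;
}
```

Note that `null` is a valid `value`, so presence is checked with the `in` operator rather than a truthiness test.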
`close(): void` - Synchronously closes this KV handle. Call once in `teardown()`. After `close()`, this handle rejects async operations with `StoreClosedError` on both backends. Closing one handle does not affect other open handles until the shared store refcount reaches zero. It also does not allow later `openKv()` calls to switch backend, path, serialization, or key-tracking options. Do not call `close()` from `default()` iterations.
`randomKey()` complexity by backend:

- `trackKeys: true`:
  - disk backend: no prefix -> O(1); prefix -> O(log n) via key index.
  - memory backend: no prefix includes a small O(shards) selection step; prefix performs per-shard range selection (O(shards * log n)) before final rank/select.
- `trackKeys: false` (default): scan-based path with linear cost in the matching keyspace.

Achieving tracked-path speeds means keys are mirrored in in-memory helper structures, so large datasets consume more RAM, and index slices/maps do not shrink automatically. Budget for that footprint or rebuild indexes periodically.

- Random key workloads: Calling `randomKey()` repeatedly with `trackKeys: false` (especially on the disk backend) keeps a read transaction open while it counts and selects keys, which can stall the lone bbolt writer until the call finishes. Turn on `trackKeys` (for O(1)/O(log n) sampling) or throttle/redesign these workloads to avoid head-of-line blocking.
- `randomKeys()` complexity by backend: With `trackKeys: true`, both backends use key indexes for small samples; memory first builds shard ranges (O(shards * log n)) and then selects sampled keys by rank, while disk may fall back to a cursor scan for near-full unique samples. With `trackKeys: false`, it collects candidates via `scanKeys()` and samples in memory (linear in matching keys).
- Memory `trackKeys: false` scan/list costs: `scan()`, `scanKeys()`, `list()`, and `listKeys()` use untracked shard-map iteration. On large keyspaces, repeated pagination can become expensive; if these operations are hot, prefer `trackKeys: true`.
- Disk backend and `trackKeys`: bbolt is the persistent source of truth. With `trackKeys: true`, the disk backend maintains an exact derived in-memory key index, rebuilt from bbolt on open/restore and updated after successful mutations. That index accelerates `randomKey()`, `randomKeys()`, and `count()`, while cursor-style key reads (`scanKeys()` and `listKeys()`) still read from bbolt.
- Disk claim allocation: `claimRandom()` and `popRandom()` allocate inside a bbolt write transaction so lease metadata and key deletion stay consistent; they do not use the derived key index for candidate selection.
- Bound large reads and writes: For large keyspaces, prefer `scan()`/`scanKeys()` with bounded positive `limit` values instead of `list()`/`listKeys()` or unlimited `limit <= 0` calls. Oversized positive limits are rejected to avoid unbounded allocations and long transactions.

`count()`/`count({ prefix })` complexity:

- `count()` (same as `size()`):
  - memory backend: O(shards)
  - disk backend with `trackKeys: true`: O(1)
  - disk backend with `trackKeys: false`: bbolt bucket stats (not guaranteed constant-time)
- `count({ prefix })`:
  - memory + `trackKeys: true`: O(shards * log n)
  - memory + `trackKeys: false`: O(n)
  - disk + `trackKeys: true`: O(log n)
  - disk + `trackKeys: false`: O(k), where `k` is the number of keys under the prefix

- Prefix cardinality on large disk datasets: If `count({ prefix })` is hot on disk, prefer `trackKeys: true`. Without tracking, the disk path walks a bbolt cursor through matching keys.
- Both backends are optimized for concurrent workloads, but there is synchronization overhead between VUs.
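The O(log n) prefix figures above come from range arithmetic on a sorted key index: two binary searches bound the prefix range, and the range width gives both the count and a uniform sampling window. A minimal JavaScript sketch (hypothetical helpers; not the extension's actual index):

```javascript
// Hypothetical sketch of a sorted key index providing O(log n) prefix
// count and uniform random key sampling, as described above.
function lowerBound(keys, target) {
  // Standard binary search: first index whose key is >= target.
  let lo = 0, hi = keys.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (keys[mid] < target) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

function prefixRange(keys, prefix) {
  // Keys in [start, end) share the prefix; "\uffff" is a sentinel that
  // sorts after any realistic continuation of the prefix.
  const start = lowerBound(keys, prefix);
  const end = lowerBound(keys, prefix + "\uffff");
  return [start, end];
}

function countPrefix(keys, prefix) {
  const [start, end] = prefixRange(keys, prefix); // two binary searches
  return end - start;
}

function randomKeyWithPrefix(keys, prefix) {
  const [start, end] = prefixRange(keys, prefix);
  if (start === end) return null; // no keys under this prefix
  return keys[start + Math.floor(Math.random() * (end - start))];
}
```

The scan-based untracked paths pay linear cost instead because they must visit every matching key rather than jump straight to the range boundaries.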
Complete examples are available in the `examples/` directory, and production-grade k6 scenarios live under `e2e/`.
Observability-focused scripts:
- Example operation metrics in a worker queue: `examples/metrics-operations-worker-queue.js`
- Example `stats()`/`reportStats()` health snapshots: `examples/metrics-report-stats-health.js`
- Example batch random key sampling + hydration: `examples/random-keys.js`
- E2E lease-worker observability scenario: `e2e/subscription-renewal-lease-observability.js`
- E2E credential pool drain scenario: `e2e/credential-pool-drain-observability.js`
```javascript
import { openKv } from "k6/x/kv";

const kv = openKv({ backend: "memory", trackKeys: true });

export async function producer() {
  const id = (await kv.get("latest-id")) || 0;
  await kv.set(`token-${id}`, "value");
  await kv.set("latest-id", id + 1);
}

export async function consumer() {
  const key = await kv.randomKey({ prefix: "token-" });
  if (key) {
    await kv.get(key);
    await kv.delete(key);
  }
}
```

```javascript
import { openKv } from "k6/x/kv";

const kv = openKv({ backend: "disk", trackKeys: true });

export default async function () {
  await kv.set("user:1", { name: "Alice" });
  await kv.set("user:2", { name: "Bob" });
  const key = await kv.randomKey({ prefix: "user:" });
  if (key) {
    const user = await kv.get(key);
    console.log(`Random user: ${JSON.stringify(user)}`);
  }
}
```

This project uses a Taskfile for cross-platform development (Windows, Linux, macOS, WSL).
```shell
# Install Task (once)
# macOS/Linux: brew install go-task/tap/go-task
# Windows: scoop install task

# Install dev tools
task install-tools

# Format and lint
task lint-fix

# Run tests
task test

# Build k6 with local extension
task build-k6
```

| Task | Description |
|---|---|
| `task` | Show common tasks |
| `task install-githooks` | Configure Git hooks for this repository |
| `task remove-githooks` | Disable Git hooks for this repository |
| `task fetch-tags` | Fetch tags from origin |
| `task version-check` | Check what the next version would be based on commits |
| `task pin-tool-versions` | Pin CI/local tool versions to current module releases |
| `task install-lint` | Install golangci-lint into `./bin` |
| `task install-xk6` | Install xk6 into `./bin` |
| `task install-tools` | Install golangci-lint and xk6 |
| `task lint-fix` | Run golangci-lint with `--fix` |
| `task lint` | Run golangci-lint |
| `task test` | Run unit tests |
| `task test-race` | Run unit tests with race detector |
| `task build-k6` | Build k6 with local xk6-kv and xk6-file extensions into `./bin/k6` |
| `task test-e2e-all` | Run all E2E tests sequentially (all backends + key tracking combinations) |
| `task test-e2e-single E2E_SCENARIO=tenant-prefix-count-window` | Run a single E2E scenario (all backend + key tracking combinations) |
| `task clean` | Remove `./bin` and `.k6.kv` |
E2E task overrides: set `VUS` and `ITERATIONS` to change the load shape.
- `gofumpt` for formatting (stricter than `gofmt`)
- `golangci-lint` with 40+ enabled linters
- `gci` and `goimports` for import organization
- `golines` for line length (max 120 characters)

Run `task lint-fix` before committing.
This project uses automated semantic versioning based on commit messages:

- `fix: description` -> Patch version (1.0.0 -> 1.0.1)
- `feat: description` -> Minor version (1.0.0 -> 1.1.0)
- `major: description` -> Major version (1.0.0 -> 2.0.0)
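The bump rule above is simple enough to sketch (a toy illustration, not the project's actual release automation; the assumption that other commit types leave the version unchanged is mine):

```javascript
// Toy sketch of the commit-prefix -> version-bump mapping described above;
// not the project's release tooling.
function nextVersion(version, commitMessage) {
  const [major, minor, patch] = version.split(".").map(Number);
  if (commitMessage.startsWith("major:")) return `${major + 1}.0.0`;
  if (commitMessage.startsWith("feat:")) return `${major}.${minor + 1}.0`;
  if (commitMessage.startsWith("fix:")) return `${major}.${minor}.${patch + 1}`;
  return version; // assumption: other commit types do not trigger a release
}
```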
When commits are pushed to main:
- Commit messages are validated
- Tests and builds run across platforms
- Version is calculated and release is created automatically
- Pre-built k6 binaries are attached to the GitHub release
Enable semantic commit validation:

```shell
task install-githooks
```

Check next version:

```shell
task version-check
```

Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Set up git hooks: `task install-githooks`
4. Make changes and commit using semantic format:
   - `fix: your bug fix description`
   - `feat: your new feature description`
   - `major: your breaking change description`
5. Ensure code passes linting (`task lint`) and tests (`task test`)
6. Push and open a Pull Request
Note: Pull requests from forks skip semantic commit validation for easier external contributions.
GNU AGPL v3.0