# Changelog

All notable changes to LightSpeed will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com), and this project adheres to [Semantic Versioning](https://semver.org).
## 0.4.0 — 2026-04-27
- **Native Windows GUI client** — system-tray icon + egui status window.
  - `client-gui/` — new `lightspeed-gui` workspace crate (Windows-only binary).
  - `client/src/engine.rs` — `LightSpeedEngine` / `EngineStatus`: background async keepalive loop driven from a non-Tokio GUI thread via a `Handle`. Sends keepalives every 5 s, measures RTT, maintains a rolling 120-sample history.
  - `client/src/lib.rs` — `lightspeed_client` library target; re-exports `LightSpeedEngine` and `EngineStatus` for GUI consumption.
  - `client-gui/src/main.rs` — builds a dedicated multi-thread Tokio runtime, auto-connects to the LAX proxy, launches eframe. On non-Windows platforms prints a message and exits 1.
  - `client-gui/src/app.rs` — `LightSpeedApp` implements `eframe::App`:
    - Yellow 16×16 tray icon via `tray-icon` 0.17; menu: Show / Connect / Disconnect / Quit.
    - Double-click tray icon → restore window; close button → hide to tray.
    - RTT sparkline (last 120 keepalives) via `egui_plot`; traffic-light colour (green < 60 ms, yellow < 120 ms, red ≥ 120 ms).
    - Connect dialog with editable proxy `ip:port` field.
    - Repaints at 1 Hz to avoid CPU spin.
  - `Cargo.toml` (workspace) — added `"client-gui"` member.
  - `.github/workflows/ci.yml` — all Linux/macOS `cargo` commands now carry `--exclude lightspeed-gui` (tray-icon requires native display libraries absent on headless CI runners); macOS smoke-test job likewise excluded.
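The traffic-light readout is a plain threshold mapping over the measured RTT. A minimal sketch using the thresholds listed above — the enum and function names here are hypothetical, not the actual `app.rs` API:

```rust
/// Traffic-light bucket for the RTT readout. Thresholds follow the
/// changelog: green < 60 ms, yellow < 120 ms, red >= 120 ms.
/// (Illustrative names; the real GUI maps these to egui colours.)
#[derive(Debug, PartialEq)]
enum RttColour {
    Green,
    Yellow,
    Red,
}

fn rtt_colour(rtt_ms: f64) -> RttColour {
    if rtt_ms < 60.0 {
        RttColour::Green
    } else if rtt_ms < 120.0 {
        RttColour::Yellow
    } else {
        RttColour::Red
    }
}
```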
- **Anonymous, aggregated network-quality reporting** — off by default, enabled with `--telemetry`. No PII is ever transmitted. See `docs/privacy.md`.
  - `protocol/src/telemetry.rs` — new `TelemetryReport` struct (serde JSON): `game_id`, `client_country` (OS locale only), `p50_ms`, `p95_ms`, `p99_ms`, `jitter_ms`, `sample_count`, `fec_recoveries`, `fec_losses`, `client_version`. Includes `validate()` and a PII regression test (`test_no_pii_fields_in_json`) that will fail if an IP, user-id, session-token, or other identifying field is accidentally added to the JSON output.
  - `client/src/telemetry.rs` — `TelemetryCollector` (ring buffer, 1024 RTT samples). `record_rtt()`, `record_fec_recovery()`, `record_fec_loss()`. `flush()` — hand-rolled HTTP/1.0 POST over Tokio TCP with a 5 s timeout, best-effort (errors silently swallowed). `spawn_periodic_flush()` — Tokio task that fires every 15 minutes. `print_disclosure()` — ASCII banner printed on first enable.
  - `proxy/src/health.rs` — new `POST /telemetry` handler on `:8080` (same HTTP server as `/health` and `/metrics`). Body size-capped at 2 048 bytes; deserialises and validates the report; calls `metrics.record_telemetry_report()`.
  - `proxy/src/metrics.rs` — `telemetry_reports_total` counter + Prometheus output line (`lightspeed_telemetry_reports_total`).
  - `client/src/cli.rs` — `--telemetry` / `--no-telemetry` flags.
  - `client/src/main.rs` — creates a `TelemetryCollector` when `--telemetry` is set, spawns the periodic flush task, passes the collector into keepalive mode for per-echo RTT recording, and performs a final flush on Ctrl+C shutdown.
  - `client/src/modes/keepalive.rs` — accepts `Option<Arc<TelemetryCollector>>`; calls `tc.record_rtt(latency_ms)` on each keepalive echo; final flush on exit.
  - `docs/privacy.md` — full privacy disclosure covering schema, what is NOT collected, data flow, and how to audit the implementation.
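A dependency-free sketch of the report shape and its `validate()` guard, using the field list above. The real struct derives serde; the hand-rolled `to_json` here exists only to keep the example self-contained, and the validation rules shown are illustrative assumptions:

```rust
/// Sketch of the TelemetryReport schema described above. Field names follow
/// the changelog; everything else (validation rules, JSON layout) is an
/// assumption for illustration.
struct TelemetryReport {
    game_id: u8,
    client_country: String, // OS locale only, never geo-IP
    p50_ms: f64,
    p95_ms: f64,
    p99_ms: f64,
    jitter_ms: f64,
    sample_count: u32,
    fec_recoveries: u32,
    fec_losses: u32,
    client_version: String,
}

impl TelemetryReport {
    /// Reject obviously inconsistent reports before they are sent
    /// (percentiles must be ordered, and an empty report is meaningless).
    fn validate(&self) -> bool {
        self.sample_count > 0
            && self.p50_ms <= self.p95_ms
            && self.p95_ms <= self.p99_ms
    }

    /// Hand-rolled JSON for the example; the real code uses serde_json.
    fn to_json(&self) -> String {
        format!(
            "{{\"game_id\":{},\"client_country\":\"{}\",\"p50_ms\":{},\"p95_ms\":{},\"p99_ms\":{},\"jitter_ms\":{},\"sample_count\":{},\"fec_recoveries\":{},\"fec_losses\":{},\"client_version\":\"{}\"}}",
            self.game_id, self.client_country, self.p50_ms, self.p95_ms,
            self.p99_ms, self.jitter_ms, self.sample_count,
            self.fec_recoveries, self.fec_losses, self.client_version
        )
    }
}
```

A PII regression check along the lines of `test_no_pii_fields_in_json` then just asserts that no identifying substring appears in the serialised output.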
Both Vultr nodes updated to v0.4.0-dev code in a coordinated cutover (CI run 24981406005, 1m37s):
- proxy-lax (us-west-lax): restarted, ✅ healthy at uptime 44s
- relay-sgp (asia-sgp): restarted, ✅ healthy at uptime 31s
⚠️ FEC parity wire-format break — both nodes now run the compact parity format. Any client still running a v0.3.x binary that sends FEC will have its parity silently ignored (data packets still relay fine; no crash). FEC recovery requires a matching client build.
- Item L: `FecEncoder` zero-alloc API
  - New `add_packet_inplace(&[u8]) -> bool` XORs directly into the encoder's internal parity buffer (no `Bytes::copy_from_slice`, no intermediate allocation).
  - `emit_parity_to(&mut [u8]) -> usize` writes the parity block into a caller-supplied buffer.
  - `next_block()` resets the block counter, closing the per-block lifecycle without any heap touch.
  - The `relay.rs` response-listener hot path updated to the three-phase zero-alloc protocol: `add_packet_inplace` → accumulate → `emit_parity_to` → write into task-stack send buffer. Eliminates K heap allocations per FEC block (was `Bytes::copy_from_slice` × K per block).
  - Perf improvements (Criterion, release build):
    - `fec/encoder/add_packet K=2` — 46% faster
    - `fec/encoder/add_packet K=4` — 52% faster
    - `fec/encoder/add_packet K=8` — 72% faster
    - `fec/encoder/add_packet K=16` — 85% faster
    - Full block K=4/256B: 207.7 ns → 88.3 ns (−57%)
    - Full block K=8/256B: 464.8 ns → 111.3 ns (−76%)
    - Relay outbound 64B: −33%, 256B: −25%, 512B: −30%, 1024B: −27%
    - Header encode: −24%; `encode_to_array`: −14%; decode: −12%
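The three-phase lifecycle can be sketched as follows. This is an illustrative reimplementation, not the actual `protocol/src/fec.rs` code; the `MAX_PAYLOAD` constant and the compact `[XOR content][lengths_xor]` trailer follow the descriptions in this changelog:

```rust
/// Illustrative zero-alloc XOR parity encoder. All state is fixed-size,
/// so add/emit/reset never touch the heap.
const MAX_PAYLOAD: usize = 256;

struct FecEncoder {
    parity: [u8; MAX_PAYLOAD], // internal parity accumulator
    lengths_xor: u16,          // XOR of payload lengths (compact trailer)
    max_len: usize,            // longest payload seen this block
    count: usize,              // packets accumulated in the current block
}

impl FecEncoder {
    fn new() -> Self {
        FecEncoder { parity: [0; MAX_PAYLOAD], lengths_xor: 0, max_len: 0, count: 0 }
    }

    /// Phase 1: XOR a data packet straight into the parity buffer.
    /// No copy, no allocation; returns false if the packet is too large.
    fn add_packet_inplace(&mut self, pkt: &[u8]) -> bool {
        if pkt.len() > MAX_PAYLOAD {
            return false;
        }
        for (p, b) in self.parity.iter_mut().zip(pkt.iter()) {
            *p ^= *b;
        }
        self.lengths_xor ^= pkt.len() as u16;
        self.max_len = self.max_len.max(pkt.len());
        self.count += 1;
        true
    }

    /// Phase 2: write the compact parity block (max_payload_len + 2 bytes)
    /// into a caller-supplied buffer; returns bytes written.
    fn emit_parity_to(&self, out: &mut [u8]) -> usize {
        let n = self.max_len;
        out[..n].copy_from_slice(&self.parity[..n]);
        out[n..n + 2].copy_from_slice(&self.lengths_xor.to_be_bytes());
        n + 2
    }

    /// Phase 3: reset per-block state; still no heap touched.
    fn next_block(&mut self) {
        self.parity = [0; MAX_PAYLOAD];
        self.lengths_xor = 0;
        self.max_len = 0;
        self.count = 0;
    }
}
```

Because XOR is its own inverse, a receiver holding the parity block and all-but-one data packets recovers the missing packet (and, via `lengths_xor`, its exact length) by XORing everything it has.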
- Item N: `BlockState.received` fixed-size array — changed `received: Vec<Option<Bytes>>` to `received: [Option<Bytes>; MAX_BLOCK_SIZE as usize]` (= 16 slots). Eliminates the `vec![None; k]` heap allocation on every new FEC block (fires once per K data packets at normal loss rates — ~1.5× per second at 1% loss). Known trade-off: the K=4 FEC recovery cold path shows a +14–30% regression (over-initialization of 16 slots for K=4 blocks; negligible in practice).
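The decoder-side bookkeeping (fixed 16-slot `received` array, plus the 64-slot ring from the Changed section below) can be sketched like this — simplified types, with `Vec<u8>` standing in for `Bytes`, and not the real decoder:

```rust
/// Illustrative decoder bookkeeping: a 64-slot ring replaces
/// HashMap<u16, BlockState>, and `received` is a fixed array instead of
/// vec![None; k], so tracking a new block allocates nothing.
const MAX_BLOCK_SIZE: usize = 16;
const RING_SLOTS: u16 = 64;

struct BlockState {
    block_id: u16,
    received: [Option<Vec<u8>>; MAX_BLOCK_SIZE], // fixed, no per-block alloc
}

struct FecDecoder {
    blocks: Vec<Option<BlockState>>, // 64 slots, indexed by block_id % 64
}

impl FecDecoder {
    fn new() -> Self {
        FecDecoder { blocks: (0..RING_SLOTS).map(|_| None).collect() }
    }

    /// Fetch (or lazily create) the state for `block_id`. An old block
    /// occupying the same slot is implicitly evicted on reuse.
    fn block_mut(&mut self, block_id: u16) -> &mut BlockState {
        let slot = (block_id % RING_SLOTS) as usize;
        let stale = match &self.blocks[slot] {
            Some(b) => b.block_id != block_id, // slot reused → evict
            None => true,
        };
        if stale {
            self.blocks[slot] = Some(BlockState {
                block_id,
                received: std::array::from_fn(|_| None),
            });
        }
        self.blocks[slot].as_mut().unwrap()
    }
}
```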
- Item O: CI coverage job — added a `coverage` job to `.github/workflows/ci.yml` using `cargo-llvm-cov --workspace --lcov`. LCOV report uploaded as an artifact with 14-day retention.
- Tests: 153 pass, 0 fail (+5 new FEC tests for the in-place encoder API)
- ⚠️ Breaking FEC parity wire-format change — see `docs/protocol.md` §FEC Algorithm. Proxy and client must be on the same version. Mixed v0.3.x ↔ v0.4.0-dev deployments will silently misinterpret FEC parity packets and skip recovery. Deploy both nodes (proxy-lax, relay-sgp) and the client binary together.
- `TunnelHeader::encode_to_array()` — new zero-alloc stack-based header encode returning `[u8; 20]` without any heap allocation. `encode()` now delegates to it; all 8 hot-path call sites in `proxy/src/relay.rs` and `client/src/` updated to call `encode_to_array()` directly. Expected: ~7× faster (~5 ns vs ~38 ns per packet).
- Compact FEC parity emission — `FecEncoder` now emits `(max_payload_len + 2)` bytes instead of a fixed 1400 B buffer. Wire format: `[XOR content (max_payload_len bytes)][lengths_xor (2 BE bytes)]`. `FecDecoder::try_recover` updated to read the compact format and recover the exact missing-packet length from `lengths_xor`. Expected: ~10× smaller parity packets for typical 64–256 B game traffic.
- FEC decoder ring buffer — replaced `HashMap<u16, BlockState>` with a 64-slot `Vec<Option<BlockState>>` indexed by `block_id % 64`, eliminating per-packet hash overhead (~115 ns/packet). Blocks are implicitly evicted when their slot is reused.
- Benchmark — `header/encode_to_array` group added to `protocol/benches/header_bench.rs`.
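A sketch of the stack-based encode. The 20-byte size and field inventory (version, flags, sequence, timestamp, original IPs and ports) come from this changelog; the exact field order is specified in `docs/protocol.md`, so the layout below is illustrative:

```rust
/// Illustrative 20-byte tunnel header with a zero-alloc encode.
/// Field order is an assumption; the authoritative layout is in
/// docs/protocol.md.
struct TunnelHeader {
    version: u8,
    flags: u8,
    sequence: u16,
    timestamp_ms: u32,
    src_ip: [u8; 4],
    dst_ip: [u8; 4],
    src_port: u16,
    dst_port: u16,
}

impl TunnelHeader {
    /// Zero-alloc encode: everything lands in a stack array.
    fn encode_to_array(&self) -> [u8; 20] {
        let mut out = [0u8; 20];
        out[0] = self.version;
        out[1] = self.flags;
        out[2..4].copy_from_slice(&self.sequence.to_be_bytes());
        out[4..8].copy_from_slice(&self.timestamp_ms.to_be_bytes());
        out[8..12].copy_from_slice(&self.src_ip);
        out[12..16].copy_from_slice(&self.dst_ip);
        out[16..18].copy_from_slice(&self.src_port.to_be_bytes());
        out[18..20].copy_from_slice(&self.dst_port.to_be_bytes());
        out
    }

    /// `encode()` delegates, paying the heap cost only when a caller
    /// actually wants an owned buffer.
    fn encode(&self) -> Vec<u8> {
        self.encode_to_array().to_vec()
    }
}
```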
- Valorant game profile — `ValorantConfig` in `client/src/games/valorant.rs`
  - Auto-detects `VALORANT-Win64-Shipping.exe`, port range 7000–7500, Riot Vanguard anti-cheat
  - CLI: `--game valorant` — no SDR, direct UDP to Riot servers
  - `game_id::VALORANT = 5` in `protocol/src/control.rs`
- Apex Legends game profile — `ApexConfig` in `client/src/games/apex.rs`
  - Auto-detects `r5apex.exe` / `r5apex_dx12.exe`, port range 37000–37050, EAC anti-cheat
  - CLI: `--game apex` (also: `apexlegends`, `apex-legends`) — direct UDP to EA servers
  - `game_id::APEX = 6` in `protocol/src/control.rs`
- Game-registry regression test — `test_all_registered_games_are_detectable` in `games/mod.rs`. An `ALL_GAME_KEYS` const documents every CLI alias; fails CI if a new profile is added to `detect_game()` without also updating the constant. Prevents doc drift.
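The drift-guard pattern in miniature — a hypothetical stand-in registry with a subset of the real keys; the actual constant and registry live in `client/src/games/`:

```rust
/// Illustrative drift guard: one const lists every documented CLI key and
/// alias, and the regression check fails if the registry stops
/// recognising any of them.
const ALL_GAME_KEYS: &[&str] =
    &["valorant", "apex", "apexlegends", "apex-legends", "rust", "rustgame"];

/// Stand-in for the real detect_game() registry.
fn detect_game(key: &str) -> Option<u8> {
    match key {
        "valorant" => Some(5),
        "apex" | "apexlegends" | "apex-legends" => Some(6),
        "rust" | "rustgame" => Some(4),
        _ => None,
    }
}

/// The regression check: every documented key must be detectable.
fn all_keys_detectable() -> bool {
    ALL_GAME_KEYS.iter().all(|k| detect_game(k).is_some())
}
```

Adding a profile to `detect_game()` without extending `ALL_GAME_KEYS` (or vice versa) flips this check, which is what catches doc drift in CI.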
- Rust (Facepunch) game profile — `RustConfig` in `client/src/games/rust.rs`
  - Auto-detects `RustClient.exe`, port range 28015–28017, EAC + Facepunch Anti-Hack
  - CLI: `--game rust` (also accepts `--game rustgame`)
  - No Steam Datagram Relay — direct UDP, ideal for LightSpeed proxying
  - Added `game_id::RUST = 4` to `protocol/src/control.rs`
- `capture/injector.rs`: `sendpacket` API break on `pcap` v2.4.0 — `&raw_packet` → `raw_packet` (type now requires `Borrow<[u8]>`, not `&Borrow<[u8]>`)
- `capture/injector.rs`: spurious `unused_mut` warning on local `udp` vec
- `ml/predict.rs`: `predict_route` (ml feature path) now gracefully falls back to the weighted heuristic when `model_bytes` is empty (first run before the model is trained), instead of returning `Err`
- Fixed unused-variable warning `sent` → `_sent` in `client/src/main.rs:786`
- Updated `proxy/src/main.rs` doc-comment: OCI → Vultr
- Updated `docs/security-audit-mvp.md` threat model: OCI → Vultr infrastructure
- Added `infra/terraform/LEGACY-OCI.md` — explains OCI decommission context
- Updated `infra/README.md` — `infra/fly/` and `infra/docker/` labeled as not-pursued/legacy
- Added `infra/terraform/versions.tf` — LEGACY header with Vultr redirect
- Added `.geminirules`, `.antigravityrules`, `.agents/workflows/wat-loop.md` to repo (AI tool config, same pattern as `.clinerules`)
- Pruned stale git remote tracking refs (`origin/main`, `origin/redesign-2026` — belonged to a previous site, no longer exist on GitHub)
- Overwatch 2 game profile — `Ow2Config` in `client/src/games/ow2.rs`
  - CLI: `--game ow2` (aliases: `overwatch2`, `overwatch-2`, `overwatch`)
  - Ports: 3478–6250 (covers STUN/SIP/Battle.net/game-data range)
  - Auto-detects `Overwatch.exe` / `Overwatch_retail.exe`
  - Anti-cheat: Blizzard Warden (server-side — fully compatible with transparent UDP forwarding)
  - `game_id::OVERWATCH2 = 7` in `protocol/src/control.rs`
- League of Legends game profile — `LolConfig` in `client/src/games/lol.rs`
  - CLI: `--game lol` (aliases: `leagueoflegends`, `league-of-legends`, `league`)
  - Ports: 5000–5500 (direct UDP to Riot regional servers)
  - Auto-detects `League of Legends.exe` / `LeagueOfLegends.exe`
  - Anti-cheat: Riot Vanguard (kernel-mode, rolling out globally 2024+)
  - `game_id::LOL = 8` in `protocol/src/control.rs`
- PUBG: Battlegrounds game profile — `PubgConfig` in `client/src/games/pubg.rs`
  - CLI: `--game pubg` (alias: `battlegrounds`)
  - Ports: 7000–17999 (intra-region 7000–7999 + cross-region 17000–17999)
  - Auto-detects `TslGame.exe` / `PUBG.exe`
  - Anti-cheat: BattlEye (kernel-mode — transparent UDP forwarding compatible)
  - `game_id::PUBG = 9` in `protocol/src/control.rs`
- README Supported Games table — expanded from 6 to 9 games; now includes CLI flag, auto-detect process name, and anti-cheat columns.
- `ALL_GAME_KEYS` drift-guard expanded to 22 entries covering all canonical keys + aliases.
- macOS CI smoke-test job — new `macos-smoke` job in `.github/workflows/ci.yml`: `runs-on: macos-latest`, `cargo build --release --workspace` + `cargo test --workspace --lib`
  - Catches macOS-specific compilation failures (darwin syscalls, target-arch differences)
  - Runs in parallel with the existing `ubuntu-latest` check job
- `recvmmsg` batched inbound loop (`proxy/src/relay.rs`, Linux only) — drains up to 32 UDP datagrams per `recvmmsg(2)` syscall instead of one `recv_from` per packet. Expected: ~5–10× pps improvement per vCPU at sustained packet rates (eliminates the dominant per-packet syscall overhead). Non-Linux path unchanged (`recv_from` fallback).
  - `BatchState` — 64 KiB heap-allocated kernel-facing slab (32 × 2048 B receive buffers + `mmsghdr`/`sockaddr_in` arrays). Iovecs are rebuilt on the stack inside `do_recv` on every call, so the struct is never self-referential and does not require `Pin`.
  - `recv_batch_async` — uses `tokio::net::UdpSocket::try_io(Interest::READABLE, …)` to correctly arm/disarm Tokio's epoll interest bit. On `WouldBlock` the readiness flag is cleared so `readable().await` genuinely blocks instead of spinning.
  - `process_inbound_packet` extracted — per-packet hot-path logic shared between the Linux batch loop and the non-Linux single-recv loop; zero code duplication across platforms.
  - New metrics — `inbound_batches_total` + `inbound_packets_received` counters in `ProxyMetrics`; Grafana average batch size = `received / batches`. `record_inbound_batch(n)` called after every syscall (both paths) for consistent telemetry.
  - Linux unit test — `test_linux_batch_recv_collects_packets` sends 10 packets to a loopback socket, drains via `recv_batch_async`, asserts all 10 received with correct 64-byte lengths.
  - `libc = "0.2"` added as a Linux-only target dependency in `proxy/Cargo.toml`.
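The shape of the shared hot path can be sketched with the portable fallback. On Linux, only the body of the batch-drain function changes to a single `recvmmsg(2)` call over the `BatchState` slab; the per-packet work and the batch accounting are identical on both arms. Names here are illustrative, not the actual `relay.rs` API:

```rust
// Portable sketch of the batched inbound loop: drain up to BATCH
// datagrams from a non-blocking socket, sharing one per-packet function
// with the (hypothetical here) Linux recvmmsg arm.
use std::net::UdpSocket;

const BATCH: usize = 32;

/// Shared per-packet hot path (stands in for process_inbound_packet).
fn process_inbound_packet(buf: &[u8], total_bytes: &mut usize) {
    *total_bytes += buf.len();
}

/// Non-Linux fallback: drain up to BATCH datagrams; returns the batch
/// size, which recvmmsg(2) would report directly. The batch size feeds
/// the inbound_batches_total / inbound_packets_received metrics.
fn drain_batch(sock: &UdpSocket, total_bytes: &mut usize) -> usize {
    let mut buf = [0u8; 2048];
    let mut n = 0;
    while n < BATCH {
        match sock.recv_from(&mut buf) {
            Ok((len, _addr)) => {
                process_inbound_packet(&buf[..len], total_bytes);
                n += 1;
            }
            // Socket drained: stop and let the caller await readiness.
            Err(e) if e.kind() == std::io::ErrorKind::WouldBlock => break,
            Err(_) => break,
        }
    }
    n
}
```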
- US-East / EU-West mesh expansion
- Discord community server
- v1.0.0 public stable release
## 0.3.0 — 2026-03-20
First fully load-tested, monitoring-equipped release. Both proxy nodes validated at 0.00% packet loss under sustained load. Pre-built binaries now ship with every release via automated CI/CD.
- Online learning wired into `main.rs` — keepalive probe RTTs now feed into `OnlineLearner` during both keepalive mode and capture mode, with automatic model retraining and cross-session persistence to `~/.lightspeed/measurements.json`
- Enhanced Prometheus metrics — 20+ metrics including latency histograms (11 buckets), FEC recovery counters, auth/abuse/rate-limit security metrics, session lifecycle tracking, build info, and uptime gauges. All exported with `region` + `node_id` labels.
- Route-aware health server — `/health` returns JSON, `/metrics` returns Prometheus exposition format. Proper HTTP routing with 404 for unknown paths.
- Prometheus config — scrape targets for both Vultr nodes (proxy-lax + relay-sgp), 10s scrape interval, 30d retention.
- Alerting rules — 10 alert rules across 5 groups: node health (down/restart), latency (warning at 100ms, critical at 500ms), capacity (connections, drops, no traffic), security (auth rejections, abuse, rate limits), and FEC health.
- Pre-built Grafana dashboard — 6 sections (Overview, Traffic, Latency, FEC, Security, Sessions) with 20 panels including stat, timeseries, and histogram visualizations. Auto-provisioned on startup.
- Docker Compose monitoring stack — one-command `docker compose up -d` deploys Prometheus + Grafana with persistent volumes, health checks, and auto-provisioning.
- Enhanced mesh-health.sh — built-in node list, `--metrics` flag for Prometheus output, `--json` flag for machine-readable output, FEC recovery display.
- Load testing tool (`tools/load_test.py`) — multi-client concurrent UDP stress test with ramp-up, per-node and `--all-nodes` modes, latency percentiles (p50/p95/p99), packet loss measurement, pre/post health checks, and JSON export.
- Vultr deploy script (`infra/scripts/deploy-vultr.sh`) — cross-compile, SCP upload, rolling restart via systemd with pre/post health verification.
- GitHub Actions CI pipeline — test → fmt → clippy → cross-compile (Windows x64, Linux x64, Linux ARM64) → auto-release on tag push
- Pre-built binaries — all three platform binaries attached to every GitHub Release automatically
- Issue templates — Bug Report, Game Request, Feature Request
- CONTRIBUTING.md — full dev setup guide, game support guide, proxy hosting guide
- GitHub Discussions — community Q&A and announcements enabled
- 12 repo topics — rust, gaming, network-optimizer, ping-reducer, multiplayer, proxy, fortnite, cs2, dota2, open-source, udp, latency
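The label convention above means every line of the `/metrics` exposition output carries both `region` and `node_id`. A small sketch of the line format (helper name and metric name are illustrative, not the actual `metrics.rs` API):

```rust
/// Render one Prometheus exposition line with the region + node_id
/// labels described above. Illustrative helper, not the real code.
fn prom_line(name: &str, region: &str, node_id: &str, value: u64) -> String {
    format!("lightspeed_{name}{{region=\"{region}\",node_id=\"{node_id}\"}} {value}")
}
```

With those labels, Grafana can aggregate or filter per node without any per-node scrape configuration.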
| Node | Region | Packets Sent | Packets Recv | Loss | p50 | p95 | Throughput |
|---|---|---|---|---|---|---|---|
| proxy-lax | US-West (LA) | 9,131 | 9,131 | 0.00% | 214ms | 282ms | 129 pps |
| relay-sgp | Singapore | 22,742 | 22,742 | 0.00% | 31ms | 35ms | 320 pps |
Estimated capacity: 500–1,000+ concurrent clients per node. Free tier headroom: >99.9%.
- 20 Clippy lints across workspace
- 2 failing warp integration tests
## 0.2.0 — 2026-02-23
Major infrastructure deployment, real-world latency analysis, competitive feature development, and live integration test passing on 2-node Vultr mesh.
- XOR-based FEC codec — `protocol/src/fec.rs` with `FecEncoder` / `FecDecoder`
  - Groups K data packets → 1 parity packet for zero-retransmit loss recovery
  - `FecHeader` extension (protocol v2): group_id, packet_index, group_size, parity flag
  - `FecStats` tracking: encoded/decoded/recovered/lost counters
  - 8 unit tests: encode/decode, single-loss recovery, multi-group, stats, edge cases
- Integrated FEC into live tunnel pipeline (`client/src/main.rs`)
- `client/src/warp.rs` — `WarpManager` for automatic WARP detection and control
  - Detects WARP CLI installation and connection state
  - Auto-connect/disconnect with state restoration on shutdown (`Drop` impl)
  - IP routing analysis: checks if traffic routes through WARP's excluded ranges
  - Provides status, tunnel stats, and connection info
  - 5–10 ms latency improvement confirmed (203 ms → 193 ms from Bangkok)
  - Bypasses ISP HGC Singapore detour via Cloudflare NTT backbone
- `client/src/redirect.rs` — `UdpRedirect` local UDP proxy
  - Binds a local port, intercepts game traffic, wraps it in tunnel headers
  - Forwards through the proxy node, relays responses back to the game client
  - Supports per-game port configuration with automatic session management
  - CLI: `--redirect` flag for redirect mode vs capture mode
- `web/` — GitHub Pages landing page with live benchmark data
  - Real E2E performance results, FEC explanation, WARP integration info
  - Deployed at https://shibbityshwab.github.io/lightspeed/
- Dropped OCI San Jose — switched to Vultr-only infrastructure
- 3-node mesh deployed: Vultr LA (primary) + Vultr SGP (relay) + OCI SJ (decommissioned)
- Native binary deployment: ~500KB RAM per node (350x less than Docker)
- systemd service with sandboxing (DynamicUser, ProtectSystem, NoNewPrivileges)
- Protocol v2 — added FEC header extension (6 bytes: group_id, index, size, flags)
- Version field now supports v1 (plain) and v2 (FEC-enabled)
- `TunnelHeader::with_session_token()` builder pattern added
- `make_response()` method for proxy-side header swapping
| Node | IP | Region | Latency (BKK) | RAM | Status |
|---|---|---|---|---|---|
| proxy-lax | [redacted] | us-west-lax | 206ms | 504KB | ✅ Active |
| relay-sgp | [redacted] | asia-sgp | 31ms | 496KB | ✅ Active |
- Relay strategy analysis: SGP relay does NOT reduce latency (31ms + 178ms = 209ms > 206ms direct)
- Pacific crossing bottleneck: ~172ms submarine cable physics, not routing
- ISP path analysis: True Internet → SBN/AWN → HGC Singapore (29ms detour identified)
- WARP analysis: CF BKK PoP → NTT backbone bypasses HGC detour, 5-10ms net improvement
- ExitLag gap: 6ms remaining (193ms vs 187ms) — premium BGP transit peering
- Live integration test passing (2026-02-23):
- proxy-lax: 204.8ms, 10/10 keepalives, 0.3ms jitter ✅
- relay-sgp: 34.0ms, 10/10 keepalives, 0.3ms jitter ✅
- E2E tunnel relay verified across all proxy nodes
- FEC module: 8 tests passing
- WARP IP routing logic: unit tested
- UDP redirect mode: tested with game traffic simulation
## 0.1.0 — 2026-02-22
The first release of LightSpeed — a zero-cost, open-source global network optimizer for multiplayer games.
- UDP Tunnel Engine — async packet relay with Tokio, keepalive, stats, and timeout handling
- Tunnel Header Protocol — efficient 20-byte binary header with encode/decode, session tokens, sequence numbers
- QUIC Control Plane — proxy discovery, health checks, and control messaging via quinn
- Game Profiles — built-in configurations for Fortnite, CS2, and Dota 2 (port ranges, server IPs, anti-cheat info)
- Route Selection Framework — nearest-proxy selector, multipath config, failover logic
- ML Route Prediction — feature extraction (11 features), synthetic training data, Random Forest model via linfa, heuristic fallback
- Packet Capture Abstraction — cross-platform capture trait with platform-specific backends (Windows/Linux/macOS)
- Configuration System — TOML-based config with CLI overrides via clap
- UDP Relay Loop — high-performance session-based packet relay with concurrent client support
- Session Management — token-based sessions with automatic timeout and cleanup
- Rate Limiting — per-IP and per-session rate limiting with configurable thresholds
- Abuse Detection — destination validation, amplification prevention, private IP blocking
- Authentication — lightweight token-based client authentication
- Metrics — Prometheus-compatible metrics endpoint (connections, packets, bytes, latency)
- Health Endpoint — HTTP health check for monitoring and load balancing
- QUIC Control Server — control plane for client discovery and health probing
- Binary Header Format — 20-byte tunnel header: version, flags, sequence, timestamp, original IPs and ports
- Control Messages — Binary-encoded control protocol (Ping, Pong, Register, RegisterAck, Disconnect, ServerInfo)
- Shared Types — common types used by both client and proxy
- Full architecture design (`docs/architecture.md`)
- Protocol specification (`docs/protocol.md`)
- Security audit report (`docs/security-audit-mvp.md`)
- Integration test report (`docs/test-report-mvp.md`)
- 52 tests total, 100% pass rate
- End-to-end tunnel lifecycle tests
- Concurrent client relay tests
- Security integration tests (spoofed tokens, rate limiting, abuse detection)
- Performance benchmarks (162μs tunnel overhead)
- Token-based session authentication
- Per-IP and per-session rate limiting
- Destination validation (blocks private IPs, localhost, multicast)
- Amplification attack prevention
- No Critical or High findings in security audit
- Language: Rust (2021 edition)
- Async Runtime: Tokio
- Tunnel Protocol: Custom 20-byte UDP header, unencrypted for transparency
- Control Plane: QUIC via quinn (feature-gated)
- Target Overhead: ≤5ms (achieved: 162μs average)
- Supported Platforms: Windows x64, Linux x64, Linux ARM64