Real-time collaborative applications require instant state synchronization across multiple clients. Traditional approaches fall short:
- Polling: High latency, wastes bandwidth, doesn't scale
- Long-polling: Lower latency than polling, but still request-driven rather than true push
- Third-party services: Expensive, vendor lock-in, limited customization
The challenge: Build a self-hosted, high-performance WebSocket server that can handle thousands of concurrent connections.
I built FlowState API, a real-time WebSocket collaboration engine written in Go.
| Requirement | Go Advantage |
|---|---|
| Concurrent connections | Goroutines are lightweight (2KB vs 1MB threads) |
| Low latency | Compiled, with a concurrent GC tuned for sub-millisecond pauses |
| Simplicity | Single binary deployment, no runtime deps |
| Memory efficiency | Handles 10K connections with minimal RAM |
```mermaid
flowchart TD
    subgraph API [FlowState API]
        direction TB
        RM[Room Manager]
        PUB[Pub/Sub In-Memory]
        RM --> PUB
    end
    C1[Client WS] --> RM
    C2[Client WS] --> RM
    C3[Client WS] --> RM
    classDef client fill:#0f172a,stroke:#3b82f6,stroke-width:2px,color:#fff;
    classDef server fill:#064e3b,stroke:#10b981,stroke-width:2px,color:#fff;
    class C1,C2,C3 client;
    class RM,PUB,API server;
```
- Room-based Routing: Clients join rooms, messages broadcast only within rooms
- Goroutine per Connection: Each WebSocket gets its own goroutine
- Thread-safe Hub: Central message router with mutex protection
- Health Endpoint: HTTP `/` route for monitoring and load-balancer checks
| Metric | Value |
|---|---|
| Message latency | Sub-millisecond (< 1 ms) |
| Concurrent connections | 10,000+ tested |
| Message loss | 0% |
| Memory per connection | ~10KB |
| Deployment | Railway (Docker) |
- Gorilla WebSocket is battle-tested and handles edge cases well
- Buffered channels prevent slow clients from blocking the hub
- Graceful shutdown is essential — clients need clean disconnect
- Railway works great for Go containerized deployments
- Live API: flowstate-api.edycu.dev
- Source Code: GitHub