A high-performance, production-ready video streaming proxy built with TypeScript and Node.js. Ultra Proxy is designed to be faster, more reliable, and more feature-rich than traditional streaming proxies.
- High Performance: Connection pooling, intelligent caching, and optimized HTTP client
- Scalability: Built-in clustering support for multi-core systems
- Security: Rate limiting, input validation, CORS support, and security headers
- Reliability: Robust error handling, retry mechanisms, and graceful degradation
- Monitoring: Comprehensive logging, metrics, and health checks
- Caching: Intelligent caching with TTL and size limits
- Compression: Automatic response compression for better performance
- Streaming: Efficient handling of video segments and playlists
- Docker: Production-ready Docker containerization
- Node.js 18.0.0 or higher
- npm or yarn
- Docker (optional, for containerized deployment)
- Clone and install dependencies:

  ```bash
  git clone https://github.com/friday2su/Ultra-Proxy.git
  cd ultra-proxy
  npm install
  ```

- Build the application:

  ```bash
  npm run build
  ```

- Start in development mode:

  ```bash
  npm run dev
  ```

- Start in production mode:

  ```bash
  npm start
  ```

- Build and run with Docker Compose:

  ```bash
  docker-compose up --build
  ```

- Build and run manually:

  ```bash
  docker build -t ultra-proxy .
  docker run -p 3000:3000 ultra-proxy
  ```
Copy `.env.example` to `.env` and modify the values:

```bash
cp .env.example .env
```

- Server: Port, host, connection limits, timeouts
- Security: Rate limiting, CORS origins, allowed domains
- Performance: Caching, compression settings
- Logging: Log level, format, request logging
- Retry: Retry attempts and delays
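The retry behavior can be sketched as a simple exponential-backoff wrapper. This is illustrative only — `withRetry` and its defaults are not names from the actual codebase; the real attempt counts and delays come from the Retry configuration above.

```typescript
// Exponential-backoff retry sketch. Attempt count and base delay would be
// driven by the Retry configuration; these defaults are illustrative.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        // Wait 200 ms, 400 ms, 800 ms, ... between attempts
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt)
        );
      }
    }
  }
  throw lastError;
}
```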
```
GET /api/m3u8-proxy?url=<encoded_url>&headers=<encoded_headers>
GET /m3u8-proxy.m3u8?url=<encoded_url>&headers=<encoded_headers>   # legacy
GET /api/ts-proxy?url=<encoded_url>&headers=<encoded_headers>
GET /ts-proxy.ts?url=<encoded_url>&headers=<encoded_headers>       # legacy
GET /health    # Health check
GET /metrics   # System metrics
GET /          # API documentation
```
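Both proxy endpoints take a percent-encoded target URL plus an optional percent-encoded JSON object of upstream headers. A request URL might be assembled like this (the `localhost:3000` base is an assumption for a local deployment):

```typescript
// Build a proxied playlist URL with percent-encoded upstream headers.
const base = "http://localhost:3000"; // assumed local deployment
const target = "https://example.com/playlist.m3u8";
const upstreamHeaders = { Authorization: "Bearer token" };

const proxiedUrl =
  `${base}/api/m3u8-proxy` +
  `?url=${encodeURIComponent(target)}` +
  `&headers=${encodeURIComponent(JSON.stringify(upstreamHeaders))}`;
```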
```bash
curl "http://localhost:3000/api/m3u8-proxy?url=https%3A%2F%2Fexample.com%2Fplaylist.m3u8"
curl "http://localhost:3000/api/m3u8-proxy?url=https%3A%2F%2Fexample.com%2Fplaylist.m3u8&headers=%7B%22Authorization%22%3A%22Bearer%20token%22%7D"
curl "http://localhost:3000/api/ts-proxy?url=https%3A%2F%2Fexample.com%2Fsegment_0.ts"
```

- HTTP/HTTPS agent pooling for connection reuse
- Configurable maximum connections
- Keep-alive connections for better performance
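These pooling features map naturally onto Node's built-in keep-alive agents. A sketch — `agentFor` is an illustrative helper, not the actual source, and the `maxSockets` value mirrors the `MAX_CONNECTIONS` default:

```typescript
import http from "node:http";
import https from "node:https";

// Shared keep-alive agents so upstream sockets are reused across requests.
const httpAgent = new http.Agent({ keepAlive: true, maxSockets: 1000 });
const httpsAgent = new https.Agent({ keepAlive: true, maxSockets: 1000 });

// Pick the matching agent for a target URL's protocol.
function agentFor(url: string): http.Agent | https.Agent {
  return new URL(url).protocol === "https:" ? httpsAgent : httpAgent;
}
```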
- In-memory caching with TTL
- Cache size limits
- Content-type aware caching
- Automatic cache invalidation
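A minimal sketch of how a TTL- and size-bounded in-memory cache like this can work (`TtlCache` is an illustrative name, not the actual class):

```typescript
type Entry<V> = { value: V; expires: number };

class TtlCache<V> {
  private store = new Map<string, Entry<V>>();
  constructor(private ttlMs: number, private maxEntries: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      // Lazy invalidation: drop the entry once its TTL has passed.
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.store.size >= this.maxEntries) {
      // Evict the oldest entry (Map preserves insertion order).
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```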
- Automatic gzip/deflate compression
- Configurable compression thresholds
- Client preference detection
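The compression decision combines those two checks: the client must advertise gzip/deflate support and the payload must exceed the configured threshold. A sketch, with an illustrative default threshold:

```typescript
// Decide whether a response should be compressed. The 1 KiB threshold is an
// illustrative default, not a value taken from the actual configuration.
function shouldCompress(
  acceptEncoding: string | undefined,
  sizeBytes: number,
  thresholdBytes = 1024
): boolean {
  if (!acceptEncoding) return false;          // client preference unknown
  if (sizeBytes < thresholdBytes) return false; // too small to be worth it
  return /\b(gzip|deflate)\b/.test(acceptEncoding);
}
```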
- Automatic multi-core utilization
- Worker process management
- Graceful worker restarts
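A sketch of how such a clustering layer can be wired with Node's `cluster` module (`startCluster` and `workerCount` are illustrative names, not the actual entry point):

```typescript
import cluster from "node:cluster";
import os from "node:os";

// One worker per CPU core unless an explicit count is requested.
function workerCount(requested?: number): number {
  return requested && requested > 0 ? requested : os.cpus().length;
}

function startCluster(startServer: () => void, workers = workerCount()): void {
  if (cluster.isPrimary) {
    for (let i = 0; i < workers; i++) cluster.fork();
    // Restart crashed workers so the pool stays at full strength.
    cluster.on("exit", () => cluster.fork());
  } else {
    startServer(); // each worker runs its own HTTP server
  }
}
```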
- Configurable request limits per time window
- IP-based rate limiting
- Custom rate limit rules
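An IP-keyed, fixed-window limiter along these lines — a sketch assuming `RATE_LIMIT_MAX_REQUESTS`-style window semantics, not the actual implementation:

```typescript
// Fixed-window rate limiter keyed by client IP.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();
  constructor(private maxRequests: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if the client is over limit.
  allow(ip: string, now = Date.now()): boolean {
    const record = this.hits.get(ip);
    if (!record || now - record.windowStart >= this.windowMs) {
      // First request in a fresh window for this IP.
      this.hits.set(ip, { count: 1, windowStart: now });
      return true;
    }
    record.count++;
    return record.count <= this.maxRequests;
  }
}
```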
- URL validation and sanitization
- Header validation
- Request size limits
- XSS protection
- Content type sniffing protection
- Frame options
- Referrer policy
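The URL validation step above can be sketched as an allow-list check (`isAllowedUrl` is illustrative, not the actual validator; an empty allow-list here means no host restriction):

```typescript
// Accept only well-formed http(s) URLs, optionally restricted to an allow-list.
function isAllowedUrl(raw: string, allowedHosts: string[]): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable URL at all
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  return allowedHosts.length === 0 || allowedHosts.includes(url.hostname);
}
```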
- Structured JSON logging
- Request/response logging
- Performance metrics
- Error tracking
- System resource usage
- Request statistics
- Cache performance
- Response times
- Application health status
- Dependency checks
- Resource utilization
```
ultra-proxy/
├── src/
│   ├── routes/          # API route handlers
│   ├── middleware/      # Express middleware
│   ├── services/        # Business logic services
│   ├── utils/           # Utility functions
│   └── types/           # TypeScript type definitions
├── logs/                # Application logs
├── dist/                # Compiled JavaScript
└── docker-compose.yml   # Docker orchestration
```
```bash
# Run tests
npm test

# Run tests in watch mode
npm run test:watch

# Run linting
npm run lint

# Fix linting issues
npm run lint:fix
```

| Variable | Description | Default |
|---|---|---|
| `PORT` | Server port | `3000` |
| `HOST` | Server host | `0.0.0.0` |
| `MAX_CONNECTIONS` | Maximum concurrent connections | `1000` |
| `CACHE_ENABLED` | Enable caching | `true` |
| `CACHE_TTL` | Cache time-to-live (ms) | `300000` |
| `RATE_LIMIT_MAX_REQUESTS` | Max requests per window | `1000` |
| `LOG_LEVEL` | Logging level | `info` |
| `COMPRESSION_ENABLED` | Enable response compression | `true` |
- Environment Setup:

  ```bash
  export NODE_ENV=production
  export PORT=3000
  export LOG_LEVEL=warn
  ```

- Process Management:

  ```bash
  # Using PM2
  npm install -g pm2
  pm2 start dist/index.js --name ultra-proxy

  # Using systemd
  sudo cp ultra-proxy.service /etc/systemd/system/
  sudo systemctl enable ultra-proxy
  sudo systemctl start ultra-proxy
  ```

- Reverse Proxy (nginx):

  ```nginx
  server {
      listen 80;
      server_name your-domain.com;

      location / {
          proxy_pass http://localhost:3000;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
      }
  }
  ```
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
MIT License - see LICENSE file for details.
| Feature | ng-proxy | Ultra Proxy |
|---|---|---|
| Performance | Basic | High (connection pooling, caching) |
| Scalability | Single-threaded | Multi-core clustering |
| Security | Basic | Advanced (rate limiting, validation) |
| Monitoring | Console logs | Structured logging + metrics |
| Caching | None | Intelligent caching |
| Error Handling | Basic | Robust with retries |
| Docker | Basic | Production-ready |
| Configuration | Environment only | Comprehensive config system |
- High Memory Usage:
  - Check cache size limits
  - Monitor with the `/metrics` endpoint
  - Adjust the `MAX_CONNECTIONS` setting

- Rate Limiting:
  - Check rate limit configuration
  - Monitor with logs
  - Adjust limits in `.env`

- Connection Errors:
  - Verify target URLs are accessible
  - Check network connectivity
  - Review retry configuration

Run in development mode for verbose debug output:

```bash
NODE_ENV=development npm run dev
```

- Create an issue on GitHub
- Check the logs in the `logs/` directory
- Use the `/health` and `/metrics` endpoints for diagnostics