A full-stack web application that analyzes and visualizes FIO (Flexible I/O Tester) benchmark results. The application provides comprehensive storage performance analysis with interactive charts and supports automated performance testing workflows.
- Interactive charts powered by Chart.js with advanced controls
- Support for IOPS, bandwidth, latency, and latency percentiles
- Separate visualization of read and write operations
- Multiple chart templates with sorting, grouping, and filtering
- Export capabilities (PNG/CSV) and fullscreen mode
- Series visibility toggles and real-time chart manipulation
- Import FIO JSON results directly through web interface
- Support for multi-job FIO test files
- Automated extraction of performance metrics and latency percentiles
- Command-line upload via curl/API
- Track server hostname and storage protocol (NFS, iSCSI, etc.)
- Custom test descriptions and categorization
- Filter and organize tests by infrastructure details
- Hierarchical Data Structure: Data is organized in a 4-level hierarchy (Host → Host-Protocol → Host-Protocol-Type → Host-Protocol-Type-Model) for powerful filtering and comparison
- Production-ready shell script with .env file configuration
- Multiple block sizes (4k, 64k, 1M) and I/O patterns
- Configurable test parameters and automatic upload
- Comprehensive error handling and progress reporting
- Environment variable override support
- Direct download from application server
- SQLite database with comprehensive schema
- Test run management with edit/delete capabilities
- Performance metrics with operation-type separation, including p95/p99 latency values
- Role-based access control (admin vs upload-only users)
- bcrypt password hashing with secure credential storage
- Custom authentication forms (no browser popups)
- Comprehensive request logging and user activity tracking
- External authentication file management via Docker volumes
- Python (v3.8+) and pip
- SQLite3
- FIO (for performance testing)
- curl (for script uploads)
- Docker and Docker Compose
- FIO (on client machines for testing)
- curl or wget (for script downloads)
The application runs in a single consolidated Docker container:
# Clone repository
git clone <repository-url>
cd fio-analyzer
# Build and run with Docker Compose
cd docker
docker compose up --build
# For production deployment
docker compose -f compose.prod.yml up -d
The application will be available at http://localhost:80.
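As a quick sanity check that the stack is serving (a minimal sketch; adjust host and port to your deployment):
# nginx should answer on port 80 once the container is up
curl -I http://localhost/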
# Create authentication directories
mkdir -p docker/data/auth
# Setup admin users (full access)
docker exec -it fio-app python scripts/manage_users.py add --admin --username admin --password your_password
# Setup upload-only users (restricted access)
docker exec -it fio-app python scripts/manage_users.py add --username uploader --password your_password
# Download from your running application
wget http://your-server/fio-test.sh
# Setup configuration
chmod +x fio-test.sh
# Use --generate-env to create a .env file
./fio-test.sh --generate-env
# Edit .env with your settings
For local development with separate frontend/backend:
- Navigate to the frontend directory:
cd frontend
- Install Dependencies:
npm install
- Start the Development Server:
npm run dev
The application will be available at http://localhost:5173.
- Code Quality & Linting:
npm run lint        # Run ESLint for code quality checks
npm run type-check  # Run TypeScript compiler for type checking
npm run build       # Build for production (includes type checking)
- Navigate to the backend directory:
cd backend
- Create a Python Virtual Environment:
python3 -m venv venv
source venv/bin/activate
- Install Dependencies (choose one):
Option A - Using uv (recommended, faster):
uv sync
Option B - Using traditional pip:
pip install fastapi uvicorn python-multipart bcrypt python-jose
- Run the Backend Server:
With uv:
uv run uvicorn main:app --reload --host 0.0.0.0 --port 8000
With traditional setup:
uvicorn main:app --reload --host 0.0.0.0 --port 8000
The API will be available at http://localhost:8000.
- Code Quality & Linting:
With uv (recommended):
uv run flake8 .                       # Python linting
uv run black . && uv run isort .      # Auto-format code
uv run python -m py_compile main.py   # Syntax check
With traditional setup (if flake8, black, and isort are installed):
flake8 .                      # Python linting
black . && isort .            # Auto-format code
python -m py_compile main.py  # Syntax check
The application uses SQLite for data storage. The database is initialized automatically when the backend server starts, creating:
- test_runs - Test execution metadata including drive info, test parameters, hostname, protocol, and description
- performance_metrics - All performance data (IOPS, avg_latency, bandwidth, p95_latency, p99_latency) with operation-type separation
Sample data is populated automatically if the database is empty.
Older databases stored latency percentiles in a dedicated table latency_percentiles. Use the script below once to migrate existing records into performance_metrics and drop the obsolete table:
BEGIN;
INSERT INTO performance_metrics
(test_run_id, metric_type, value, unit, operation_type)
SELECT lp.test_run_id,
CASE lp.percentile
WHEN 95 THEN 'p95_latency'
WHEN 99 THEN 'p99_latency'
END AS metric_type,
ROUND(lp.latency_ns / 1e6, 3) AS value,
'ms' AS unit,
lp.operation_type
FROM latency_percentiles lp
WHERE lp.percentile IN (95,99)
AND NOT EXISTS (
SELECT 1
FROM performance_metrics pm
WHERE pm.test_run_id = lp.test_run_id
AND pm.metric_type = CASE lp.percentile
WHEN 95 THEN 'p95_latency'
WHEN 99 THEN 'p99_latency'
END
AND pm.operation_type = lp.operation_type
);
DROP TABLE IF EXISTS latency_percentiles;
COMMIT;
After migration, restart the backend and the new percentile metrics will be available through all /api/time-series/ endpoints.
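One way to apply the migration, assuming the consolidated Docker setup with the database persisted under docker/data/backend/db (the database filename below is a placeholder; adjust paths to your deployment):
# From the docker/ directory: stop the stack so nothing writes during the migration
docker compose down
# Apply the SQL above (saved as migrate_percentiles.sql) to the database file on the host
sqlite3 data/backend/db/<database-file> < migrate_percentiles.sql
# Bring the stack back up
docker compose up -d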
- Access the frontend at http://localhost:5173 to interact with the visualizer
- Upload FIO JSON files via the web interface with metadata forms
- Select test runs and visualize performance data with interactive charts
The FIO testing script is available for download directly from your application server and provides automated testing with configurable parameters.
# Download script
wget http://your-server/fio-test.sh
# Make executable and setup configuration
chmod +x fio-test.sh
# Use --generate-env to create a .env file
./fio-test.sh --generate-env
# Edit .env with your specific settings
Create a .env file for persistent configuration:
# Server Information
# These values form a 4-level hierarchy: Host → Host-Protocol → Host-Protocol-Type → Host-Protocol-Type-Model
HOSTNAME=myserver # Server identifier (use -vm suffix for VMs)
PROTOCOL=NFS # Storage protocol (NFS, iSCSI, Local, etc.)
DRIVE_TYPE=ssd # Drive type (hdd, ssd, nvme, raidz1, etc.; use vm- prefix for VMs)
DRIVE_MODEL=Samsung980PRO # Drive model (can include special params like "poolName-syncoff")
DESCRIPTION=Production performance test
# Test Parameters
TEST_SIZE=10M
NUM_JOBS=4
RUNTIME=30
# Backend Configuration
BACKEND_URL=http://your-server
USERNAME=admin
PASSWORD=admin
# Advanced Options
BLOCK_SIZES=4k,64k,1M
TEST_PATTERNS=read,write,randread,randwrite
# Basic usage (uses .env configuration)
./fio-test.sh
# Override specific parameters
TEST_SIZE="1M" RUNTIME="5" ./fio-test.sh
# Custom configuration with environment variables
HOSTNAME="web01" PROTOCOL="iSCSI" DESCRIPTION="Production test" ./fio-test.sh
# View help and all configuration options
./fio-test.sh --help
| Variable | Description | Default Value |
|---|---|---|
| HOSTNAME | Server hostname | Current hostname |
| PROTOCOL | Storage protocol (NFS, iSCSI, Local, etc.) | NFS |
| DESCRIPTION | Test description | "Automated performance test" |
| TEST_SIZE | Size of test file | 1G |
| NUM_JOBS | Number of parallel jobs | 4 |
| RUNTIME | Test runtime in seconds | 60 |
| BACKEND_URL | Backend API URL | http://localhost:8000 |
| TARGET_DIR | Directory for test files | ./fio_tmp/ |
The script automatically tests 12 combinations:
- Block Sizes: 4k, 64k, 1M
- I/O Patterns: read, write, randread, randwrite
- Total Tests: 3 × 4 = 12 tests per execution
The script provides colored progress output showing:
- Configuration summary
- Individual test progress (X/12)
- Upload status for each test
- Final summary with success/failure counts
- Automatic cleanup of temporary files
For continuous performance monitoring, you can set up a cron job to run FIO tests automatically on an hourly, daily, or custom schedule.
# Edit your crontab
crontab -e
# Add entry for hourly tests (runs at the top of every hour)
0 * * * * cd /path/to/your/scripts && ./fio-test.sh >> /var/log/fio-tests.log 2>&1
# Add entry for daily tests (runs at 2 AM every day)
0 2 * * * cd /path/to/your/scripts && ./fio-test.sh >> /var/log/fio-tests.log 2>&1
# Add entry for business hours only (9 AM to 5 PM, Monday-Friday)
0 9-17 * * 1-5 cd /path/to/your/scripts && ./fio-test.sh >> /var/log/fio-tests.log 2>&1
Create a wrapper script for better control and logging:
# Create /path/to/your/scripts/fio-cron-wrapper.sh
#!/bin/bash
# Set environment variables
export PATH="/usr/local/bin:/usr/bin:/bin"
export HOSTNAME="$(hostname)"
export PROTOCOL="NVMe"
export DESCRIPTION="Automated hourly performance test"
export BACKEND_URL="http://your-server"
export USERNAME="your-upload-user"
export PASSWORD="your-password"
# Add timestamp to logs
echo "$(date): Starting FIO performance test" >> /var/log/fio-tests.log
# Run the test with timeout (kill after 30 minutes if stuck)
timeout 1800 /path/to/your/scripts/fio-test.sh >> /var/log/fio-tests.log 2>&1
# Log completion
echo "$(date): FIO test completed with exit code $?" >> /var/log/fio-tests.log# Make wrapper executable
chmod +x /path/to/your/scripts/fio-cron-wrapper.sh
# Add to crontab (hourly execution)
0 * * * * /path/to/your/scripts/fio-cron-wrapper.sh
# Every 15 minutes
*/15 * * * * /path/to/your/scripts/fio-cron-wrapper.sh
# Every 6 hours
0 */6 * * * /path/to/your/scripts/fio-cron-wrapper.sh
# Twice daily (6 AM and 6 PM)
0 6,18 * * * /path/to/your/scripts/fio-cron-wrapper.sh
# Weekly on Sundays at 3 AM
0 3 * * 0 /path/to/your/scripts/fio-cron-wrapper.sh
# Monthly on the 1st at midnight
0 0 1 * * /path/to/your/scripts/fio-cron-wrapper.sh
To prevent log files from growing too large:
# Create /etc/logrotate.d/fio-tests
/var/log/fio-tests.log {
daily
rotate 30
compress
delaycompress
missingok
notifempty
create 0644 root root
}
# Test log rotation
sudo logrotate -d /etc/logrotate.d/fio-tests
Create a monitoring script to check if tests are running successfully:
# Create /path/to/your/scripts/check-fio-health.sh
#!/bin/bash
LOG_FILE="/var/log/fio-tests.log"
BACKEND_URL="http://your-server"
# Check if log was updated in last 2 hours
if [ $(find "$LOG_FILE" -mmin -120 | wc -l) -eq 0 ]; then
echo "WARNING: FIO tests may not be running - no recent log activity"
fi
# Check backend connectivity
if ! curl -s "$BACKEND_URL/api/info" > /dev/null; then
echo "ERROR: Cannot reach FIO Analyzer backend at $BACKEND_URL"
fi
# Check for recent error patterns in logs
if tail -100 "$LOG_FILE" | grep -q "ERROR\|FAILED\|timeout"; then
echo "WARNING: Recent errors found in FIO test logs"
tail -20 "$LOG_FILE" | grep -E "ERROR|FAILED|timeout"
fi
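To run the health check on its own schedule, a crontab entry along these lines can be added (a sketch; the log path is an assumption):
# Run the health check every 2 hours and append any warnings to a log
0 */2 * * * /path/to/your/scripts/check-fio-health.sh >> /var/log/fio-health.log 2>&1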
- Test Script Manually First
# Ensure script works before adding to cron
./fio-test.sh
- Create Dedicated User (Recommended)
# Create user for FIO testing
sudo useradd -m -s /bin/bash fio-tester
sudo su - fio-tester
# Setup script and cron for this user
crontab -e
- Configure Permissions
# Ensure test directory is writable
mkdir -p /tmp/fio_test
chmod 755 /tmp/fio_test
# Ensure log directory exists
sudo mkdir -p /var/log
sudo touch /var/log/fio-tests.log
sudo chown fio-tester:fio-tester /var/log/fio-tests.log
- Test Cron Environment
# Add temporary test to cron
* * * * * env > /tmp/cron-env.txt
# Compare with shell environment
diff <(env | sort) <(sort /tmp/cron-env.txt)
- Monitor and Validate
# Check cron service is running
sudo systemctl status cron
# View cron logs
sudo journalctl -u cron -f
# Verify tests appear in FIO Analyzer
curl -u username:password "http://your-server/api/time-series/latest"
# Check if FIO is installed
fio --version
# Test backend connectivity
curl http://localhost:8000/api/test-runs
# Run with verbose output (if issues occur)
# Edit the script and remove '2>/dev/null' from the fio command
# Check available space in target directory
df -h /tmp/fio_test
Generate FIO results and upload manually:
# Generate FIO JSON output
fio --name=test --rw=randwrite --bs=4k --size=1G \
--iodepth=16 --runtime=60 --time_based --group_reporting \
--output-format=json --output=result.json
# Upload via API with metadata
curl -X POST -u "username:password" -F "file=@result.json" \
-F "drive_model=Samsung 980 PRO" \
-F "drive_type=NVMe SSD" \
-F "hostname=server01" \
-F "protocol=NFS" \
-F "description=Production benchmark" \
http://localhost:8000/api/import
All API endpoints require authentication. Use basic authentication with username/password.
- GET /api/test-runs - Retrieve all test runs with metadata
- PUT /api/test-runs/:id - Update test run drive information
- DELETE /api/test-runs/:id - Delete test run and associated data
- GET /api/test-runs/performance-data - Retrieve performance data for specific test runs
  - Query params: test_run_ids (comma-separated), metric_types (optional)
  - Returns: Test metadata, separated read/write metrics, latency percentiles
- POST /api/import - Import FIO JSON results with metadata
  - Form data: file (FIO JSON), drive_model, drive_type, hostname, protocol, description
  - Available to both admin users and upload-only users
- GET /api/filters - Get available filter options for drive types, models, patterns, and block sizes
- GET /fio-test.sh - Download the FIO testing script
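For example, performance data for one or more runs can be pulled from the command line with basic authentication (a minimal sketch; the run IDs and credentials are placeholders):
# Fetch performance data for test runs 1 and 2
curl -u admin:your_password \
  "http://your-server/api/test-runs/performance-data?test_run_ids=1,2"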
Deploy the script across multiple servers for comprehensive infrastructure analysis:
# Server 1 - NFS storage
HOSTNAME="web01" PROTOCOL="NFS" DESCRIPTION="Web server NFS test" ./scripts/performance_test.sh
# Server 2 - iSCSI storage
HOSTNAME="db01" PROTOCOL="iSCSI" DESCRIPTION="Database server iSCSI test" ./scripts/performance_test.sh
# Server 3 - Local SSD
HOSTNAME="app01" PROTOCOL="Local" DESCRIPTION="Application server local SSD test" ./scripts/performance_test.shRun extended performance tests:
# Extended test with larger files and longer runtime
TEST_SIZE="50G" RUNTIME="600" NUM_JOBS="16" \
HOSTNAME="storage-test" PROTOCOL="iSCSI" \
DESCRIPTION="Extended load test - 50GB over 10 minutes" \
./scripts/performance_test.sh
The application provides comprehensive performance analysis:
- IOPS - Input/Output Operations Per Second for read and write operations
- Bandwidth - Throughput in KB/s for read and write operations
- Latency - Average latency in milliseconds for read and write operations
- Latency Percentiles - P1, P5, P10, P20, P30, P40, P50, P60, P70, P80, P90, P95, P99, P99.5, P99.9, P99.95, P99.99
- Hierarchical Filtering: Filter by Host → Host-Protocol → Host-Protocol-Type → Host-Protocol-Type-Model
- Filter by drive model, drive type, storage protocol
- Search by hostname, test description
- Organize by block size and I/O patterns
- Time-based filtering and sorting
- App.tsx - Main application orchestrating data flow
- TestRunSelector - Multi-select dropdown for test runs
- TemplateSelector - Chart template/visualization picker
- InteractiveChart - Chart.js-powered data visualization
- Upload.tsx - FIO file upload interface with metadata forms
- main.py - FastAPI application with modular routers and database logic
- Database Schema - SQLite with test_runs, performance_metrics, and latency_percentiles tables
- Multi-job Import - Processes all jobs from FIO JSON files
- Metadata Support - Full infrastructure context tracking
-- Test execution metadata
test_runs (id, timestamp, drive_model, drive_type, test_name, block_size,
read_write_pattern, queue_depth, duration, fio_version, job_runtime,
rwmixread, total_ios_read, total_ios_write, usr_cpu, sys_cpu,
hostname, protocol, description)
-- Performance metrics with operation separation
performance_metrics (id, test_run_id, metric_type, value, unit, operation_type)
-- Detailed latency percentile data
latency_percentiles (id, test_run_id, operation_type, percentile, latency_ns)
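For ad-hoc inspection, the persisted SQLite file can also be queried directly with the sqlite3 CLI (a sketch; the database filename is a placeholder):
# Show a few stored metrics joined with their test run metadata
sqlite3 docker/data/backend/db/<database-file> \
  "SELECT tr.hostname, tr.block_size, pm.metric_type, pm.value, pm.unit
   FROM test_runs tr
   JOIN performance_metrics pm ON pm.test_run_id = tr.id
   LIMIT 10;"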
The application uses a consolidated single-container architecture:
- Frontend: React app served by nginx on port 80
- Backend: FastAPI API running internally on port 8000
- Reverse Proxy: nginx proxies /api/* requests to the backend
- Static Files: Testing script and config served by nginx
- Build: Multi-stage Docker build for optimized production deployment
volumes:
- ./data/backend/db:/app/db # Database persistence
- ./data/backend/uploads:/app/uploads # Uploaded files
- ./data/auth/.htpasswd:/app/.htpasswd # Admin users
- ./data/auth/.htuploaders:/app/.htuploaders # Upload-only users
# Development build
docker compose up --build
# Production deployment
docker compose -f compose.prod.yml up -d
# Using pre-built registry images
IMAGE_TAG=latest docker compose -f compose.prod.yml up -d
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.
- Fork the repository
- Create a feature branch
- Make your changes with appropriate tests
- Submit a pull request with a clear description
- Real-time performance monitoring
- Advanced statistical analysis
- Performance regression detection
- Multi-tenancy support
- REST API documentation with OpenAPI/Swagger