A fast, no-reference video quality benchmarking tool using BRISQUE and other IQA metrics. Extracts sampled frames, computes perceptual quality scores, and compares encodes objectively.
The tool samples frames from each video, scores them with the BRISQUE no-reference image quality metric (via `pyiqa`), and reports an average score per video. Lower BRISQUE scores correspond to higher perceptual quality.
1. Install prerequisites:
   - Install uv → https://docs.astral.sh/uv/
   - Install ffmpeg and make sure both `ffmpeg` and `ffprobe` are on your `PATH`.
2. Run a comparison:

   ```
   uv run compare_videos.py /path/to/video1.mkv /path/to/video2.mkv
   ```

Features:

- Compares two video files (e.g. x264 vs x265 encodes of the same content).
- No reference video required – works when all you have are the transcodes.
- Uses BRISQUE (no-reference IQA) via `pyiqa`.
- Automatically samples a reasonable number of frames per video (default ~180 frames).
- Optionally lets you override the sampling rate in frames per second.
- Displays progress for:
  - Frame extraction (via `ffmpeg`)
  - Quality computation across frames (with multiprocessing)
Requirements:

- Python ≥ 3.10
- uv installed on your system (used to run the script and resolve dependencies).
- ffmpeg and ffprobe available on your `PATH` (used to probe durations and extract frames).
You don’t need to manually create a virtual environment or install Python packages.
The script uses uv’s script metadata to declare its dependencies, so you can just run it directly:

```
uv run compare_videos.py /path/to/video1.mkv /path/to/video2.mkv
```
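
For context, uv reads those dependencies from an inline script metadata block (PEP 723) at the top of the script. A minimal sketch of what such a block looks like; the exact dependency list below is illustrative, not copied from the script:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "pyiqa",  # BRISQUE and other IQA metrics (assumed)
#     "torch",  # backend used by pyiqa (assumed)
#     "tqdm",   # progress bars (assumed)
# ]
# ///
```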
Options:

- `--sample-rate FLOAT` – frames per second to sample from each video. Example:

  ```
  uv run compare_videos.py video1.mkv video2.mkv --sample-rate 0.5
  ```

  This samples one frame every 2 seconds (0.5 fps).
If `--sample-rate` is omitted, the script:

- Uses `ffprobe` to get the video duration.
- Chooses an automatic fps such that ~180 frames are sampled per video, regardless of length (see the sketch below).
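
As a rough sketch of that auto-fps logic (the constant and function name here are hypothetical, assuming a ~180-frame budget):

```python
TARGET_FRAMES = 180  # approximate number of frames to sample per video (assumed)

def auto_sample_rate(duration_seconds: float) -> float:
    """Pick an fps so that roughly TARGET_FRAMES frames are extracted,
    regardless of how long the video is."""
    return TARGET_FRAMES / duration_seconds

# e.g. a 360-second video gives 180 / 360 = 0.5 fps, i.e. one frame every 2 seconds
```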
How it works:

1. Probe video duration
   Uses `ffprobe` to get the length (in seconds) of each video (a sketch of the kind of call involved follows).
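
   A sketch of that probe using a standard `ffprobe` invocation (the helper name is hypothetical; the actual script may structure this differently):

   ```python
   import subprocess

   def probe_duration(path: str) -> float:
       """Return the container duration (in seconds) reported by ffprobe."""
       result = subprocess.run(
           ["ffprobe", "-v", "error",
            "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1",
            path],
           capture_output=True, text=True, check=True,
       )
       return float(result.stdout.strip())
   ```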
2. Choose sampling rate
   - If `--sample-rate` is provided, it uses your value.
   - Otherwise, it computes an fps that yields ~180 frames per video.
3. Extract frames
   Uses `ffmpeg` with a filter like:

   ```
   ffmpeg -i input.mkv -vf fps=<computed_fps> -q:v 2 frame_%06d.jpg
   ```

   Frames are written into a temporary directory and cleaned up automatically when done (see the sketch below).
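
   One plausible way to drive that from Python (a sketch; the helper name and surrounding structure are assumptions, not the script's actual internals):

   ```python
   import subprocess
   import tempfile
   from pathlib import Path

   def extract_frames(video: str, fps: float, out_dir: Path) -> list[Path]:
       """Extract JPEG frames at `fps` into out_dir using ffmpeg."""
       pattern = str(out_dir / "frame_%06d.jpg")
       subprocess.run(
           ["ffmpeg", "-hide_banner", "-loglevel", "error",
            "-i", video, "-vf", f"fps={fps}", "-q:v", "2", pattern],
           check=True,
       )
       return sorted(out_dir.glob("frame_*.jpg"))

   # The temporary directory is deleted automatically when the block exits:
   with tempfile.TemporaryDirectory() as tmp:
       frames = extract_frames("input.mkv", 0.5, Path(tmp))
   ```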
4. Compute BRISQUE on each frame
   - The script loads a BRISQUE model via `pyiqa`.
   - Scores each frame (skipping degenerate frames where BRISQUE fails).
   - Uses multiprocessing to speed up evaluation (see the sketch below).
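
   A sketch of that scoring stage (`pyiqa.create_metric("brisque")` is pyiqa's real entry point, but the worker layout and names here are illustrative):

   ```python
   import multiprocessing as mp

   import pyiqa

   _metric = None  # one BRISQUE model per worker process

   def _init_worker():
       global _metric
       _metric = pyiqa.create_metric("brisque", device="cpu")

   def _score_frame(path: str):
       try:
           return float(_metric(path).item())  # pyiqa metrics accept an image path
       except Exception:
           return None  # skip degenerate frames where BRISQUE fails

   def score_frames(paths, workers=3):
       with mp.Pool(workers, initializer=_init_worker) as pool:
           scores = pool.map(_score_frame, paths)
       return [s for s in scores if s is not None]
   ```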
5. Aggregate results
   - Scores are averaged per video.
   - The final printout summarizes each BRISQUE score and which video appears better, roughly as sketched below.
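
   The aggregation itself is just an average and a comparison; a minimal sketch:

   ```python
   from statistics import mean

   def summarize(scores_by_video: dict[str, list[float]]) -> None:
       averages = {name: mean(s) for name, s in scores_by_video.items()}
       for name, avg in averages.items():
           print(f"{name}: BRISQUE {avg:.4f}")
       winner = min(averages, key=averages.get)
       print(f"=> {winner} appears higher quality (lower BRISQUE).")
   ```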
Example output:

```
Using CPU for IQA metrics.
No sample-rate provided; using auto default: 0.50022 fps (~1 frame every 2.0 sec)
Extracting frames from video1.mkv at 0.50022 fps (~180 frames, ~1 every 2.0 sec)...
Extracting video1.mkv: 100%|███████████████████████████████████| 180/180 [00:18, 9.94frame/s]
Extracted 180 frames for analysis.
Computing BRISQUE across 180 frames using 3 workers...
100%|███████████████████████████████████████████████████████████| 180/180 [00:25, 7.12frame/s]
...

=== RESULTS (lower BRISQUE is better) ===
video1 (/path/to/video1.mkv):
  BRISQUE: 71.9572
video2 (/path/to/video2.mkv):
  BRISQUE: 68.1234

=> video2 appears higher quality (lower BRISQUE).
```
Notes:

- No-reference metric: BRISQUE is an opinion-aware, no-reference quality metric trained on human opinion scores. It’s useful for comparing encodes, but it’s not a perfect proxy for human perception.
- Same content recommended: compare two encodes of the same underlying content (same movie/episode) for the scores to be meaningful.
- Performance:
  - Frame extraction cost scales with video length and sampling fps.
  - BRISQUE itself can be relatively heavy, especially on CPU.
  - The script uses multiprocessing to speed up scoring.
- GPU support:
  - If PyTorch detects a CUDA-capable GPU, the script will use `cuda`; otherwise it falls back to CPU.
  - This is controlled automatically by `torch.cuda.is_available()`, as in the sketch below.
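
A minimal sketch of that device selection (variable names are illustrative):

```python
import pyiqa
import torch

# Use the GPU when PyTorch can see one; otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
metric = pyiqa.create_metric("brisque", device=device)
```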
The script:
- Does not hard-code any user-specific paths or credentials.
- Does not send any video or frame data over the network.
- Only uses local temporary directories and deletes them when complete.
If you publish your copy to GitHub, you should be safe as long as you haven’t added any secrets or personal paths elsewhere in the repo.
This project is licensed under the MIT License.
You’re free to use, modify, and distribute it, provided you keep the copyright and license notice.
See LICENSE for full details.