
vqbench


A fast, no-reference video quality benchmarking tool using BRISQUE and other IQA metrics. Extracts sampled frames, computes perceptual quality scores, and compares encodes objectively.

It uses the BRISQUE no-reference image quality metric (via pyiqa) to sample frames from each video and compute an average quality score. Lower BRISQUE scores correspond to higher perceptual quality.


Quickstart (TL;DR)

  1. Install prerequisites: uv, plus ffmpeg and ffprobe on your PATH (see Requirements below).

  2. Run a comparison:

uv run compare_videos.py /path/to/video1.mkv /path/to/video2.mkv

Features

  • Compares two video files (e.g. x264 vs x265 encodes of the same content).
  • No reference video required – works when all you have are the transcodes.
  • Uses BRISQUE (no-reference IQA) via pyiqa.
  • Automatically samples a reasonable number of frames per video (default ~180 frames).
  • Optionally lets you override the sampling rate in frames-per-second.
  • Displays progress for:
    • Frame extraction (via ffmpeg)
    • Quality computation across frames (with multiprocessing)

Requirements

  • Python ≥ 3.10
  • uv installed on your system (used to run the script and resolve dependencies).
  • ffmpeg and ffprobe available on your PATH (used to probe durations and extract frames).

Installation & Usage

You don’t need to manually create a virtual environment or install Python packages.
The script uses uv’s script metadata to declare its dependencies, so you can just run it directly:

uv run compare_videos.py /path/to/video1.mkv /path/to/video2.mkv

Optional arguments

  • --sample-rate FLOAT
    Frames per second to sample from each video. Example:

    uv run compare_videos.py video1.mkv video2.mkv --sample-rate 0.5

    This samples one frame every 2 seconds (0.5 fps).

If --sample-rate is omitted, the script:

  1. Uses ffprobe to get the video duration.
  2. Chooses an automatic fps such that ~180 frames are sampled per video, regardless of length.
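The automatic rate reduces to a one-line calculation; a minimal sketch, assuming the target of ~180 frames (the function name is illustrative, not taken from the script):

```python
def auto_sample_fps(duration_sec: float, target_frames: int = 180) -> float:
    """Pick a sampling rate (fps) so roughly `target_frames` frames are
    extracted, regardless of video length.

    `duration_sec` is the video length reported by ffprobe.
    """
    if duration_sec <= 0:
        raise ValueError("duration must be positive")
    return target_frames / duration_sec
```

For example, a 360-second video yields 0.5 fps (one frame every 2 seconds), matching the sample rate shown in the example output below.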

How It Works

  1. Probe video duration
    Uses ffprobe to get the length (in seconds) of each video.

  2. Choose sampling rate

    • If --sample-rate is provided, it uses your value.
    • Otherwise, it computes an fps that yields ~180 frames per video.
  3. Extract frames
    Uses ffmpeg with a filter like:

    ffmpeg -i input.mkv -vf fps=<computed_fps> -q:v 2 frame_%06d.jpg

    Frames are written into a temporary directory and cleaned up automatically when done.

  4. Compute BRISQUE on each frame

    • The script loads a BRISQUE model via pyiqa.
    • Each frame is scored, skipping degenerate frames where BRISQUE fails.
    • Multiprocessing is used to speed up evaluation.
  5. Aggregate results

    • Scores are averaged per video.
    • Final printout summarizes each BRISQUE score and which video appears better.
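Steps 4–5 boil down to averaging per-frame scores while skipping failures; a minimal sketch of that aggregation (the helper name is hypothetical, with failed frames represented as None or NaN):

```python
import math
from statistics import mean

def aggregate_brisque(scores):
    """Average per-frame BRISQUE scores, skipping frames where scoring
    failed (None or NaN stand in for a frame BRISQUE couldn't handle)."""
    valid = [s for s in scores if s is not None and not math.isnan(s)]
    if not valid:
        raise ValueError("no frames could be scored")
    return mean(valid)
```

Because BRISQUE is lower-is-better, the video with the smaller aggregate wins the comparison.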

Example Output

Using CPU for IQA metrics.
No sample-rate provided; using auto default: 0.50022 fps (~1 frame every 2.0 sec)
Extracting frames from video1.mkv at 0.50022 fps (~180 frames, ~1 every 2.0 sec)...
Extracting video1.mkv: 100%|███████████████████████████████████| 180/180 [00:18,  9.94frame/s]
Extracted 180 frames for analysis.

Computing BRISQUE across 180 frames using 3 workers...
100%|███████████████████████████████████████████████████████████| 180/180 [00:25,  7.12frame/s]

...

=== RESULTS (lower BRISQUE is better) ===

video1 (/path/to/video1.mkv):
  BRISQUE: 71.9572

video2 (/path/to/video2.mkv):
  BRISQUE: 68.1234

=> video2 appears higher quality (lower BRISQUE).

Notes & Limitations

  • No-reference metric: BRISQUE is a learned, no-reference quality metric. It’s useful for comparing encodes, but it’s not a perfect proxy for human perception.
  • Same content recommended: You should compare two encodes of the same underlying content (same movie/episode) for the scores to be meaningful.
  • Performance:
    • Frame extraction cost scales with video length and sampling fps.
    • BRISQUE itself can be relatively heavy, especially on CPU.
    • The script uses multiprocessing to speed up scoring.
  • GPU support:
    • If PyTorch detects a CUDA-capable GPU, the script will use cuda; otherwise it’ll fall back to CPU.
    • This is controlled automatically by torch.cuda.is_available().
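The device choice is a single check; a sketch with the availability flag injected so it runs without torch installed (the function is illustrative — in the script the flag comes from torch.cuda.is_available()):

```python
def pick_device(cuda_available: bool) -> str:
    """Return the torch device string the script would select.

    `cuda_available` stands in for torch.cuda.is_available().
    """
    return "cuda" if cuda_available else "cpu"
```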

Safety / Privacy

The script:

  • Does not hard-code any user-specific paths or credentials.
  • Does not send any video or frame data over the network.
  • Only uses local temporary directories and deletes them when complete.

If you publish your copy to GitHub, you should be safe as long as you haven’t added any secrets or personal paths elsewhere in the repo.


License

This project is licensed under the MIT License.
You’re free to use, modify, and distribute it, provided you keep the copyright and license notice.

See LICENSE for full details.
