This repository contains implementations and pre-trained models for various image restoration tasks including denoising and deblurring. The models have been tested on multiple datasets and configurations to evaluate their performance.
- Denoising Models: REDNet, DnCNN, Restormer, MaIR
- Deblurring Models: DeblurGANv2, Restormer, MaIR
A Gradio-based web demo is also available for interactively testing the models.
- Python 3.11+
- GNU Make (optional, for downloading weights and datasets)
- CUDA-capable GPU (optional but recommended for faster inference and compatibility with pre-trained weights)
```shell
make install-packages
```

Or manually install the required packages:

```shell
pip install uv
uv pip install -r requirements.txt \
    torch==2.7 torchvision --extra-index-url https://download.pytorch.org/whl/cu126 \
    https://github.com/state-spaces/mamba/releases/download/v2.2.5/mamba_ssm-2.2.5+cu12torch2.7cxx11abiTRUE-cp311-cp311-linux_x86_64.whl
```

Note:
- Use `uv` for better and faster dependency resolution.
- The torch version should match mamba's supported versions. Adjust the CUDA version as needed.
- Find the pre-built wheels for mamba here.
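The mamba wheel filename encodes the package version, CUDA version, torch version, C++ ABI flag, and Python tag. As a convenience, a small helper (hypothetical, not part of this repository) can build the filename to look for on the releases page, assuming the naming scheme of the wheel pinned above:

```python
import sys

def mamba_wheel_name(version: str, cuda: str, torch_ver: str) -> str:
    """Build the expected mamba-ssm wheel filename for the current
    interpreter, following the naming scheme of the release assets
    (e.g. mamba_ssm-2.2.5+cu12torch2.7cxx11abiTRUE-cp311-cp311-linux_x86_64.whl).
    The arguments must match an asset that actually exists."""
    py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return (
        f"mamba_ssm-{version}+{cuda}torch{torch_ver}cxx11abiTRUE-"
        f"{py_tag}-{py_tag}-linux_x86_64.whl"
    )

# On Python 3.11 this reproduces the wheel pinned in the install command.
print(mamba_wheel_name("2.2.5", "cu12", "2.7"))
```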
Download the pre-trained weights:

```shell
# ~11.6 GB
make download-weights
```

Download the datasets (only required for running the full test suite or for running the demo app with these datasets):

```shell
# ~2 GB
make download-datasets
```

Run the full test suite:

```shell
python scripts/tests.py
```

This will run tests for all models and tasks on the full datasets. All images and results are stored in the `results/` directory. The test configurations are detailed below.
Small-scale tests for demo purposes:
```shell
python scripts/test_demo.py
```

This will run tests for all models and tasks on a single image from representative datasets. All images and results are stored in the `demo/` directory.
These tests cover a variety of image restoration tasks including denoising and deblurring using different datasets and models.
After running the tests, the results are saved in the `results/` directory along with a summary CSV file, `results/results_summary.csv`.
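Restoration results of this kind are commonly summarized with PSNR. For reference, a minimal pure-Python sketch of the standard PSNR formula is shown below; the repository's test scripts may compute their metrics differently:

```python
import math

def psnr(ref, out, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel values (peak = 255 for 8-bit images)."""
    mse = sum((r - o) ** 2 for r, o in zip(ref, out)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)

# Every pixel off by 1 gives MSE = 1, i.e. ~48.13 dB for 8-bit images.
print(round(psnr([10, 20, 30, 40], [11, 21, 31, 41]), 2))  # -> 48.13
```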
- Gaussian Denoising
  - Gray Image
    - Non-blind
      - Sigmas: 15, 25, 50
      - Datasets: Set12, BSD68, Urban100
      - Models: REDNet (sig=50 only), DnCNN, Restormer
    - Blind
      - Sigmas: 15, 25, 50
      - Datasets: Set12, BSD68, Urban100
      - Models: DnCNN, Restormer
  - Color Image
    - Non-blind
      - Sigmas: 15, 25, 50
      - Datasets: CBSD68, Kodak, McMaster, Urban100
      - Models: Restormer, MaIR
    - Blind
      - Sigmas: 15, 25, 50
      - Datasets: CBSD68, Kodak, McMaster, Urban100
      - Models: DnCNN, Restormer
- Real Denoising (Non-blind)
  - Datasets: SIDD
  - Models: Restormer, MaIR
- Defocus Deblurring
  - Datasets: DPDD
  - Models: Restormer (single-image, dual-pixel)
- Motion Deblurring
  - Datasets: GoPro, HIDE, RealBlur_J, RealBlur_R
  - Models: DeblurGANv2 (fpn_inception, fpn_mobilenet), Restormer, MaIR
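The non-blind and blind Gaussian configurations above evaluate on clean images degraded with additive white Gaussian noise at the listed sigma levels. A minimal sketch of how such inputs are typically generated is shown below (the repository's scripts may do this differently, e.g. on float tensors without clipping):

```python
import random

def add_gaussian_noise(pixels, sigma, seed=0):
    """Degrade a clean image (flat list of 0-255 pixel values) with
    additive white Gaussian noise of standard deviation `sigma`,
    clipping to the valid range. Sigma 15/25/50 matches the test
    configurations above. The seed makes the degradation reproducible."""
    rng = random.Random(seed)
    return [min(255.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in pixels]

clean = [128.0] * 16
noisy = add_gaussian_noise(clean, sigma=25)
print(all(0.0 <= p <= 255.0 for p in noisy))  # values stay in range -> True
```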
To run the Gradio-based web demo for interactive testing of the models:
```shell
gradio scripts/demo.py
```

Note:
- Avg. time: Average inference time per image (in seconds) on an NVIDIA Tesla T4 GPU.
- GFLOPs: Number of floating-point operations (in billions) for processing a 256x256 image.
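To make the GFLOPs figure concrete, the cost of a single convolution layer can be estimated with the usual `2 · H · W · C_in · C_out · k²` rule (one multiply-accumulate counted as two operations). This is only an illustration of the counting convention; profiling tools differ, and some report MACs instead:

```python
def conv2d_flops(h, w, c_in, c_out, k, macs_as_two_ops=True):
    """Estimate FLOPs of one k x k convolution on an h x w feature map
    (stride 1, 'same' padding, bias ignored). Counting conventions vary
    between tools; some report MACs, which is half this number."""
    macs = h * w * c_in * c_out * k * k
    return 2 * macs if macs_as_two_ops else macs

# One 3x3 conv, 64 -> 64 channels, on a 256x256 feature map:
flops = conv2d_flops(256, 256, 64, 64, 3)
print(f"{flops / 1e9:.2f} GFLOPs")  # -> 4.83 GFLOPs
```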
- The original implementations of the models used in this repository are credited to their respective authors.
- The tool for converting Caffe models (used in REDNet) to PyTorch is credited to caffemodel2pytorch.
- The datasets used for testing are provided by the Restormer repository.



