PRISM: Plain MRI Recognition and Interpretable System for hepatic Malignancy

PRISM is a multi-stage deep learning framework for automated, gadolinium-free diagnosis of Focal Liver Lesions (FLLs) using only non-contrast MRI (NC-MRI).

Developed and validated on a large multicenter cohort of 12,823 patients from 9 institutions, PRISM addresses the limitations of dynamic contrast-enhanced MRI (DCE-MRI) by providing a fast, non-invasive, expert-level diagnostic tool.

✨ Features

  • State-of-the-Art Backbone: Built on the UniFormer-B architecture, designed for temporal and volumetric data.
  • Cross-Validation: Implements a robust 5-fold cross-validation strategy for training and evaluation to ensure model generalization.
  • End-to-End Workflow: Provides clear, executable scripts for the entire pipeline: data preparation, training, inference, and model ensembling.
  • Reproducibility: Includes clear instructions to set up the environment and run the experiments.

🚀 Getting Started

Follow these instructions to set up and run the project on your local machine.

Prerequisites

  • Python 3.8+
  • PyTorch
  • pip for package management

Installation

  1. Clone the repository:

    git clone https://github.com/puyln/PRISM.git
    cd PRISM
  2. Install the required dependencies:

    pip install -r requirements.txt

⚙️ Usage Workflow

The project is structured into a simple, three-step process.

1. Data Preparation

  • Input Data: The model operates on lesion-centered 3D ROIs. Place your dataset in the directory structure the scripts expect, under ./data/classification_dataset.
  • Data Splits: Pre-defined splits for 5-fold cross-validation are provided; the label files are in ./data/classification_dataset/labels. You can also generate your own splits as needed.
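
If you generate your own splits, the shape of the task is a standard k-fold partition over case IDs. The sketch below is an illustration only, using Python's standard library; the actual label-file format and case-ID naming in this repository are assumptions, not taken from the scripts.

```python
import random

def make_kfold_splits(case_ids, k=5, seed=0):
    """Shuffle case IDs deterministically and partition them into k folds.

    Each fold serves once as the validation set; the remaining folds form
    the training set. Returns a list of {"train": [...], "val": [...]} dicts.
    """
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)          # fixed seed for reproducibility
    folds = [ids[i::k] for i in range(k)]     # round-robin partition
    splits = []
    for i in range(k):
        val = folds[i]
        train = [c for j, fold in enumerate(folds) if j != i for c in fold]
        splits.append({"train": train, "val": val})
    return splits

# Hypothetical case IDs, for illustration only:
splits = make_kfold_splits([f"case_{i:03d}" for i in range(20)], k=5)
```

Each case appears in exactly one validation fold, so the five validation sets together cover the whole cohort without overlap.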

2. Model Training

To train the models, you first need the base pre-trained weights for UniFormer-B.

  • Download Pre-trained Model:

    1. The model is initialized with weights from the official UniFormer-B model, pre-trained on the Kinetics-400 dataset. You can find the official repository at https://github.com/Sense-X/UniFormer.
    2. You will need to download the specified model version (Kinetics-400, #Frame:8x1x4, Sampling Stride:8).
    3. Since the downstream task has a different classification head, you must remove any layers with mismatching shapes from the weight file (e.g., the final classification layer) to create a "pruned" version.
    4. Place the processed weight file in the ./pretrained_weights/ directory.
  • Run the Training Script: Once the pre-trained model is in place, you can start training the two sets of 5-fold cross-validation models (10 models in total) using the following command:

    sh ./main/train.sh

    The training progress and resulting model checkpoints will be saved in their respective directories.
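
The weight-pruning step described above (step 3 of the download instructions) amounts to dropping every pre-trained parameter whose shape no longer matches the target model. A minimal sketch follows; it demonstrates the logic with stand-in NumPy arrays, since the same shape comparison applies to a real `torch.load` state dict. The layer names and class counts here are illustrative assumptions, not the actual checkpoint contents.

```python
import numpy as np

def prune_mismatched(pretrained_state, model_state):
    """Keep only parameters whose name exists in the target model with an
    identical shape; everything else (e.g. the old classification head) is
    dropped. Works for any mapping of name -> array/tensor with .shape."""
    kept, dropped = {}, []
    for name, weight in pretrained_state.items():
        if name in model_state and tuple(weight.shape) == tuple(model_state[name].shape):
            kept[name] = weight
        else:
            dropped.append(name)
    return kept, dropped

# Stand-in arrays; a real run would load the downloaded Kinetics-400
# checkpoint with torch.load (hypothetical layer names below):
pretrained = {"backbone.conv1": np.zeros((64, 3, 3, 3)),
              "head.fc": np.zeros((400, 512))}   # 400-class Kinetics head
model = {"backbone.conv1": np.zeros((64, 3, 3, 3)),
         "head.fc": np.zeros((7, 512))}          # new downstream head
kept, dropped = prune_mismatched(pretrained, model)
```

The `kept` dict can then be saved as the "pruned" weight file; loading it with `strict=False` lets the new classification head keep its random initialization.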

3. Prediction and Ensembling

  • Run Inference: Execute the prediction script to generate scores from all 10 trained models.

    sh ./main/result.sh

    This will generate prediction files in the subfolders of ./results/.

  • Ensemble the Results: Finally, merge the scores from all prediction files to produce a single, more robust prediction.

    sh ./main/ensembling.sh

    The final ensembled prediction file will be created at ./results/merged_score.json.
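
Score ensembling of this kind typically averages each case's class scores across all models. The sketch below shows that merge step on in-memory dicts; the JSON schema (case ID mapped to a list of class probabilities) and the fold names are assumptions, since the actual format produced by result.sh is defined by the repository's scripts.

```python
def ensemble_scores(score_dicts):
    """Average per-case class scores across several models' predictions.

    Assumes every dict maps case_id -> list of class probabilities, with
    the same case IDs and class ordering in each dict.
    """
    merged = {}
    for case_id in score_dicts[0]:
        per_model = [d[case_id] for d in score_dicts]
        # zip(*per_model) yields one tuple per class, across all models
        merged[case_id] = [sum(col) / len(col) for col in zip(*per_model)]
    return merged

# Hypothetical per-fold prediction dicts (in practice, loaded with
# json.load from the files under ./results/):
fold_a = {"case_001": [0.9, 0.1], "case_002": [0.2, 0.8]}
fold_b = {"case_001": [0.7, 0.3], "case_002": [0.4, 0.6]}
merged = ensemble_scores([fold_a, fold_b])
```

Writing `merged` back out with `json.dump` would yield a single combined score file analogous to ./results/merged_score.json.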
