
CODAvision

bioRxiv License: MIT

CODAvision is an open-source Python package designed for semantic segmentation of biomedical images through a user-friendly interface.


Table of Contents

  1. System Requirements
  2. Installation Guide
  3. Demo
  4. Adding Custom Model Architectures

1. System Requirements

🧰 Hardware

  • Minimum Requirements:

    • Computer with ≥16 GB RAM
    • NVIDIA GPU with ≥8 GB VRAM (Windows/Linux only)
    • Operating System: Windows 10/11, macOS 11+, or Linux
    • Storage: ≥2.5 GB free space
  • Tested Configuration:

    • Workstation with 128 GB RAM
    • NVIDIA GeForce RTX 4090 GPU
    • Operating System: Windows 11

🖥️ Software

  • CODAvision Repository
  • Python IDE (optional, e.g., PyCharm, Visual Studio, Spyder)
  • Image Annotation Tool (choose one):
    • Aperio ImageScope

    • QuPath

      ⚠️ Note for QuPath Users:
      To use the GUI-guided workflow in CODAvision with annotations created in QuPath, first export the annotations for each image as GeoJSON files via File > Export Objects as GeoJSON.
      These GeoJSON files must then be converted to the XML format that CODAvision reads.
      You can perform this conversion with the scripts provided in the GeoJSON2XML repository (a minimal, illustrative conversion sketch follows below).
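For orientation, here is a minimal, self-contained sketch of the kind of conversion involved. It is not the GeoJSON2XML scripts: the file names, the exterior-ring-only handling, and the grouping of polygons into one annotation layer per QuPath class are illustrative assumptions.

# Illustrative GeoJSON -> ImageScope-style XML conversion (not the GeoJSON2XML scripts).
# Assumes a QuPath "Export Objects as GeoJSON" file containing Polygon features.
import json
import xml.etree.ElementTree as ET
from collections import defaultdict

def geojson_to_xml(geojson_path, xml_path):
    with open(geojson_path) as f:
        features = json.load(f)["features"]

    # Group polygons by their QuPath classification name (one annotation layer per class).
    by_class = defaultdict(list)
    for feat in features:
        if feat["geometry"]["type"] != "Polygon":
            continue  # MultiPolygons and lines would need extra handling
        name = feat.get("properties", {}).get("classification", {}).get("name", "unclassified")
        by_class[name].append(feat["geometry"]["coordinates"][0])  # exterior ring only

    root = ET.Element("Annotations")
    for layer_id, (name, polygons) in enumerate(by_class.items(), start=1):
        ann = ET.SubElement(root, "Annotation", Id=str(layer_id), Name=name)
        regions = ET.SubElement(ann, "Regions")
        for region_id, ring in enumerate(polygons, start=1):
            region = ET.SubElement(regions, "Region", Id=str(region_id))
            vertices = ET.SubElement(region, "Vertices")
            for x, y in ring:
                ET.SubElement(vertices, "Vertex", X=str(x), Y=str(y))

    ET.ElementTree(root).write(xml_path, xml_declaration=True, encoding="utf-8")

geojson_to_xml("image_01.geojson", "image_01.xml")  # hypothetical file names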


⚙️ 2. Installation Guide

Step 1: Install Miniconda

Download and install Miniconda by following the instructions provided here.


Step 2: Create and Activate CODAvision Environment

For Windows and Linux:

conda create -n CODAvision python=3.9.19
conda activate CODAvision

For macOS:

  • Apple Silicon with GPU support (M1/M2/M3/M4) — requires Python 3.10+:
conda create -n CODAvision python=3.10
conda activate CODAvision

Step 3: Install CUDA Toolkit and cuDNN

For Windows and Linux only:

Ensure that the NVIDIA CUDA drivers are installed by following the instructions here. Then install the CUDA Toolkit and cuDNN:

conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0

For macOS users: Skip this step.


Step 4: Install CODAvision

⚠️ Note:
Ensure Git is installed. If not, download it from here.

For Windows and Linux:

pip install -e git+https://github.com/Kiemen-Lab/CODAvision.git#egg=CODAvision

For macOS:

  • Apple Silicon with GPU acceleration (M1/M2/M3/M4):
pip install -e "git+https://github.com/Kiemen-Lab/CODAvision.git#egg=CODAvision[macos-silicon]"

This installs tensorflow-macos, tensorflow-metal, and other dependencies. Do not install keras separately (it's included).

After installation, restart your IDE and reactivate the environment:

conda activate CODAvision

💡 Alternative installation option: You can also clone the repository first and install dependencies locally:

git clone https://github.com/Kiemen-Lab/CODAvision.git
cd CODAvision
pip install -e .
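Optionally, you can verify that TensorFlow detects the GPU before training. This is a quick sanity check rather than a documented install step; on Windows/Linux it should list the CUDA device, and on Apple Silicon the Metal device:

python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"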

🖼️ Step 5: Launch CODAvision GUI

After completing the installation, run the following command to launch the GUI:

python CODAvision.py

⏱️ Typical Installation Time: Approximately 10–15 minutes on a standard desktop computer.


🎬 3. Demo

📂 Sample Dataset

Access the sample dataset here.

📝 Instructions to Run on Sample Data

Access the demo instructions here.

📊 Expected Output

Access the expected output here.

⏳ Expected Runtime

  • GPU-Powered Workstation: Approximately 2–3 hours for model training and image processing.
  • Desktop Computer with no GPU: Image processing and training time may extend up to 10 hours.

🔧 4. Adding Custom Model Architectures


CODAvision uses a flexible plugin-based architecture that allows you to easily integrate new segmentation models.

To add your own model architecture:

  1. Review the comprehensive guide in MODEL_PLUGIN_ARCHITECTURE.md
  2. Follow the abstract base class pattern to ensure compatibility
  3. Register your model in the factory function
  4. Your model will automatically appear in the GUI and training pipeline (a minimal plugin sketch follows this list)
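As a rough illustration of steps 2 and 3, the sketch below shows the general shape of such a plugin. The names SegmentationModelBase, register_model, and MODEL_REGISTRY are illustrative assumptions, not the actual CODAvision API; the real base class and factory function are documented in MODEL_PLUGIN_ARCHITECTURE.md.

# Illustrative plugin sketch only. The base-class and registry names here
# (SegmentationModelBase, MODEL_REGISTRY, register_model) are assumptions;
# see MODEL_PLUGIN_ARCHITECTURE.md for the real names.
from abc import ABC, abstractmethod
import tensorflow as tf

MODEL_REGISTRY = {}  # hypothetical factory: model name -> model class

def register_model(name):
    def wrapper(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrapper

class SegmentationModelBase(ABC):
    @abstractmethod
    def build(self, input_shape, num_classes):
        """Return a compiled tf.keras.Model for semantic segmentation."""

@register_model("tiny_unet")
class TinyUNet(SegmentationModelBase):
    def build(self, input_shape, num_classes):
        inputs = tf.keras.Input(shape=input_shape)
        x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
        x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
        outputs = tf.keras.layers.Conv2D(num_classes, 1, activation="softmax")(x)
        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        return model

# Hypothetical usage: the GUI/training pipeline looks the model up by its registered name.
model = MODEL_REGISTRY["tiny_unet"]().build(input_shape=(256, 256, 3), num_classes=4)

In this pattern, registering the class under a string key is what lets the GUI and training pipeline discover the model without any further wiring.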

The plugin architecture supports:

  • TensorFlow/Keras models (currently supported); PyTorch support is coming soon
  • Multi-framework model registry
  • Seamless integration with existing workflows

For comprehensive guidance on annotation dataset creation, see the CODAvision Protocol.

