177 changes: 177 additions & 0 deletions SETUP_GUIDE.md
@@ -0,0 +1,177 @@
# SpineNet Setup Guide - Python Development

## Quick Setup for Local Machine (macOS)

### Automated Setup
```bash
cd /Users/kienha/spinet-v2
./local_setup.sh
```
The script runs in a subshell, so after it finishes, activate the virtual environment and set `PYTHONPATH` in your current shell (the script prints the exact commands).

### Manual Setup
```bash
cd /Users/kienha/spinet-v2

# Create virtual environment
python3 -m venv spinenet-venv
source spinenet-venv/bin/activate

# Install dependencies
pip install --upgrade pip
pip install -r requirements.txt

# Set PYTHONPATH
export PYTHONPATH=$PYTHONPATH:/Users/kienha/spinet-v2
```
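To confirm the environment is active before installing anything, compare `sys.prefix` with `sys.base_prefix`; a minimal stdlib sketch:

```python
import sys

# Inside a virtual environment, sys.prefix points at the venv directory,
# while sys.base_prefix still points at the base Python installation.
in_venv = sys.prefix != sys.base_prefix
print("virtual environment active:", in_venv)
```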

### Run Test Script
```bash
source spinenet-venv/bin/activate
export PYTHONPATH=$PYTHONPATH:/Users/kienha/spinet-v2
python test_spinenet.py
```

**Note**: On a Mac the pipeline runs on the CPU (no CUDA); the test script detects the available device and falls back to CPU automatically.

---

## Setup for Vast.ai GPU Machine

### 1. Choose a Vast.ai Instance
- Select an instance with:
- CUDA-enabled GPU (e.g., RTX 3090, RTX 4090, A6000)
- At least 16GB RAM
- PyTorch template or Ubuntu 20.04/22.04

### 2. SSH into Your Vast.ai Instance
```bash
ssh -p [PORT] root@[IP_ADDRESS]
```

### 3. Upload and Run Setup Script
```bash
# From your local machine, upload the setup script
scp -P [PORT] vast_setup.sh root@[IP_ADDRESS]:~/

# SSH into the machine
ssh -p [PORT] root@[IP_ADDRESS]

# Run the setup script (uses main branch by default)
chmod +x vast_setup.sh
./vast_setup.sh

# Or specify a different branch
./vast_setup.sh --branch test-run-project
./vast_setup.sh --branch your-feature-branch
```

### 4. Run Python Scripts
```bash
cd spinet-v2
source spinenet-venv/bin/activate
python test_spinenet.py
```

---

## Development with PyCharm/VSCode

### VSCode Setup
1. Open project folder: `/Users/kienha/spinet-v2` (local) or `~/SpineNetV2` (vast.ai)
2. Press `Cmd+Shift+P` (Mac) or `Ctrl+Shift+P` (Linux)
3. Type "Python: Select Interpreter"
4. Choose `./spinenet-venv/bin/python`
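The interpreter can also be pinned per-workspace; a minimal `.vscode/settings.json` sketch (this uses the VS Code Python extension's standard setting):

```json
{
  "python.defaultInterpreterPath": "${workspaceFolder}/spinenet-venv/bin/python"
}
```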

### PyCharm Setup
1. Open project folder
2. Go to: Settings → Project → Python Interpreter
3. Add interpreter → Existing environment
4. Select: `spinenet-venv/bin/python`

### Important Note
Make sure PYTHONPATH includes the project root:
```bash
export PYTHONPATH=$PYTHONPATH:$(pwd)
```
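PYTHONPATH entries are prepended to `sys.path` at interpreter startup, which is what makes `import spinenet` resolve against the checkout; a minimal stdlib sketch that mimics the effect for a hypothetical project root:

```python
import os
import sys

project_root = "/Users/kienha/spinet-v2"  # illustrative project root

# Mimic what PYTHONPATH does: put the project root on sys.path so
# imports resolve against the checkout.
if project_root not in sys.path:
    sys.path.insert(0, project_root)

# PYTHONPATH itself is a list of directories joined by os.pathsep (":" on Unix).
pythonpath_entries = os.environ.get("PYTHONPATH", "").split(os.pathsep)
print(project_root in sys.path)  # → True
```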

---

## Test Script Usage

The `test_spinenet.py` script demonstrates the core SpineNet pipeline:

```python
# Local Mac (CPU)
device = 'cpu'

# Vast.ai GPU (CUDA)
device = 'cuda:0'
```

It will:
1. Download an example T2 lumbar MRI scan (~2.5 MB)
2. Download the pre-trained model weights (~2-3 GB, first run only)
3. Detect and label vertebrae (T11-S1)
4. Extract intervertebral discs (IVDs)
5. Grade the IVDs for various spinal conditions
6. Save results to `results/<scan_name>_test_results.csv`
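The saved grades can be loaded back for further analysis; a hedged sketch using the stdlib `csv` module (the column names and values here are hypothetical stand-ins for the real file):

```python
import csv
import io

# Hypothetical rows standing in for the results CSV written by the script.
sample = io.StringIO(
    "level,grade\n"
    "L4-L5,3\n"
    "L5-S1,4\n"
)
rows = list(csv.DictReader(sample))
print(len(rows), rows[0]["level"])  # → 2 L4-L5
```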

---

## Performance Expectations

### Local Mac (CPU)
- Initial setup: ~5 minutes
- Model weight download: ~5 minutes (first run only)
- Single scan processing: ~5-10 minutes
- Good for: Testing, development, small batches

### Vast.ai GPU (CUDA)
- Initial setup: ~5 minutes
- Model weight download: ~3 minutes (first run only)
- Single scan processing: ~30-60 seconds
- Good for: Production, batch processing, research

---

## Troubleshooting

### Import Errors
Make sure PYTHONPATH is set:
```bash
export PYTHONPATH=$PYTHONPATH:$(pwd)
```

Or add this to your Python scripts:
```python
import sys
from pathlib import Path
sys.path.insert(0, str(Path.cwd()))
```

### CUDA Out of Memory
If you get CUDA out-of-memory (OOM) errors on Vast.ai:
- Try a GPU with more VRAM (24GB+)
- Process scans one at a time
- Check for other processes using GPU: `nvidia-smi`

### Missing matplotlib Error
```bash
pip install matplotlib
```

### Missing DICOM Metadata
The example scans are missing some metadata; the test script patches it in via `overwrite_dict` (passed as `metadata_overwrites`). For your own scans, ensure they have proper DICOM headers or add appropriate overwrites.
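Conceptually, the overwrites fill in DICOM fields the files lack; a minimal sketch of that idea (the header dict is hypothetical, the overwrite values mirror the test script, and the merge rule here is illustrative rather than the library's actual logic):

```python
# Hypothetical header as read from a DICOM file, with fields missing.
header = {"PixelSpacing": [0.5, 0.5]}

# Overwrites as used in test_spinenet.py.
overwrite_dict = {
    "SliceThickness": [2],
    "ImageOrientationPatient": [0, 1, 0, 0, 0, -1],
}

# Illustrative merge: supplied overwrites take precedence.
merged = {**header, **overwrite_dict}
print(merged["SliceThickness"])  # → [2]
```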

---

## Next Steps

1. Run `test_spinenet.py` to verify setup
2. Read the test script to understand the API
3. Develop your own scripts using PyCharm/VSCode
4. Use `test_spinenet.py` as a template for your own DICOM processing

For questions about the SpineNet model or grading schemes, see:
- Paper: http://zeus.robots.ox.ac.uk/spinenet2/
- GitHub: https://github.com/rwindsor1/SpineNetV2
46 changes: 46 additions & 0 deletions local_setup.sh
@@ -0,0 +1,46 @@
#!/bin/bash
set -e

echo "=== SpineNet Local Setup (macOS) ==="

# Get the directory where this script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$SCRIPT_DIR"

# Create virtual environment
echo "Creating virtual environment..."
if [ ! -d "spinenet-venv" ]; then
python3 -m venv spinenet-venv
else
echo "Virtual environment already exists, skipping..."
fi

# Activate virtual environment
echo "Activating virtual environment..."
source spinenet-venv/bin/activate

# Upgrade pip
echo "Upgrading pip..."
pip install --upgrade pip

# Install dependencies
echo "Installing dependencies..."
pip install -r requirements.txt

# Set PYTHONPATH
echo "Setting PYTHONPATH..."
export PYTHONPATH=$PYTHONPATH:$SCRIPT_DIR

echo ""
echo "=== Setup Complete! ==="
echo ""
echo "Virtual environment created at: $SCRIPT_DIR/spinenet-venv"
echo ""
echo "To activate the environment in the future, run:"
echo " source spinenet-venv/bin/activate"
echo " export PYTHONPATH=\$PYTHONPATH:$SCRIPT_DIR"
echo ""
echo "To run the test script:"
echo " python test_spinenet.py"
echo ""
echo "⚠ Note: Your Mac will use the CPU (no CUDA); the test script falls back to CPU automatically."
2 changes: 1 addition & 1 deletion requirements.txt
@@ -1,4 +1,4 @@
jupyterlab
matplotlib
numpy
opencv-python
pandas
2 changes: 1 addition & 1 deletion spinenet/models/appearance.py
@@ -50,7 +50,7 @@ def load_weights(self, save_path, verbose=True):
x for x in glob.glob(save_path + "/*.pt") if "encrypted" not in x
]
latest_pt = max(list_of_pt, key=os.path.getctime)
checkpoint = torch.load(latest_pt, map_location="cpu")
checkpoint = torch.load(latest_pt, map_location="cpu", weights_only=False)
self.load_state_dict(checkpoint["net"])
best_loss = checkpoint["loss"]
start_epoch = checkpoint["epoch"] + 1
2 changes: 1 addition & 1 deletion spinenet/models/context.py
@@ -60,7 +60,7 @@ def load_weights(self, save_path, verbose=True):
if os.path.isdir(save_path):
list_of_pt = glob.glob(save_path + "/*.pt")
latest_pt = max(list_of_pt, key=os.path.getctime)
checkpoint = torch.load(latest_pt, map_location="cpu")
checkpoint = torch.load(latest_pt, map_location="cpu", weights_only=False)
self.load_state_dict(checkpoint["net"])
best_loss = checkpoint["loss"]
start_epoch = checkpoint["epoch"] + 1
2 changes: 1 addition & 1 deletion spinenet/models/grading.py
@@ -330,7 +330,7 @@ def load_weights(self, save_path: str, verbose: bool = True) -> None:
if os.path.isdir(save_path):
list_of_pt = glob.glob(save_path + "/*.pt")
latest_pt = max(list_of_pt, key=os.path.getctime)
checkpoint = torch.load(latest_pt, map_location="cpu")
checkpoint = torch.load(latest_pt, map_location="cpu", weights_only=False)
self.load_state_dict(checkpoint["model_weights"])
start_epoch = checkpoint["epoch_no"] + 1
if verbose:
2 changes: 1 addition & 1 deletion spinenet/models/vfr.py
@@ -183,7 +183,7 @@ def load_weights(self, save_path, verbose=True):
]

latest_pt = max(list_of_pt, key=os.path.getctime)
checkpoint = torch.load(latest_pt, map_location="cpu")
checkpoint = torch.load(latest_pt, map_location="cpu", weights_only=False)

self.load_state_dict(checkpoint["net"])
best_loss = checkpoint["loss"]
4 changes: 2 additions & 2 deletions spinenet/utils/extract_volumes.py
@@ -43,8 +43,8 @@ def extract_volumes(

rotated_scan, new_bb = straighten_bb(scan, detection_poly)

new_bb_width = new_bb[:, 1].ptp()
new_bb_height = new_bb[:, 0].ptp()
new_bb_width = np.ptp(new_bb[:, 1])
new_bb_height = np.ptp(new_bb[:, 0])
edge_len = np.max([new_bb_height, new_bb_width])
y_min = int(new_bb[-1, 1] - extent * edge_len)
x_min = int(new_bb[-1, 0] - extent * edge_len)
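The change above tracks NumPy 2.0, which removed the `ndarray.ptp` method in favour of the module-level `np.ptp` function; a minimal sketch of the equivalence:

```python
import numpy as np

bb = np.array([[0, 1], [4, 7], [2, 3]])  # illustrative bounding-box points

# "Peak to peak": np.ptp(a) == a.max() - a.min().
width = np.ptp(bb[:, 1])   # 7 - 1
height = np.ptp(bb[:, 0])  # 4 - 0
print(width, height)  # → 6 4
```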
2 changes: 1 addition & 1 deletion spinenet/utils/scan_preprocessing.py
@@ -31,7 +31,7 @@ def normalize_patch(patch : np.ndarray,
lower_percentile_val = np.percentile(patch, lower_percentile)
robust_range = np.abs(upper_percentile_val - lower_percentile_val)
if upper_percentile_val == lower_percentile_val:
patch = (patch - patch.min()) / (patch.ptp() + 1e-9)
patch = (patch - patch.min()) / (np.ptp(patch) + 1e-9)
else:
patch = (patch - patch.min()) / (robust_range + 1e-9)

88 changes: 88 additions & 0 deletions test_spinenet.py
@@ -0,0 +1,88 @@
#!/usr/bin/env python3
"""
Quick test script for SpineNet
Can be used to verify installation without running Jupyter
"""

import sys
from pathlib import Path
sys.path.insert(0, str(Path.cwd()))

import os
import spinenet
from spinenet import SpineNet, download_example_scan
from spinenet.io import load_dicoms_from_folder

def main():
print("=" * 60)
print("SpineNet Quick Test")
print("=" * 60)

# Create directories
os.makedirs('example_scans', exist_ok=True)
os.makedirs('results', exist_ok=True)

# Download example scan
print("\n[1/5] Downloading example scan...")
scan_name = 't2_lumbar_scan_2'
download_example_scan(scan_name, file_path='example_scans')
print(f"✓ Downloaded {scan_name}")

# Download weights
print("\n[2/5] Downloading model weights...")
spinenet.download_weights(verbose=True, force=False)
print("✓ Weights downloaded")

# Check device
import torch
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
print(f"\n[3/5] Initializing SpineNet on device: {device}")
if device == 'cpu':
print(" ⚠ Warning: Running on CPU. This will be slower than GPU.")

# Initialize SpineNet
spnt = SpineNet(device=device, verbose=True)

# Load scan
print("\n[4/5] Loading DICOM scan...")
overwrite_dict = {
'SliceThickness': [2],
'ImageOrientationPatient': [0, 1, 0, 0, 0, -1]
}
scan = load_dicoms_from_folder(
f'example_scans/{scan_name}',
require_extensions=False,
metadata_overwrites=overwrite_dict
)
print(f"✓ Loaded scan with shape: {scan.volume.shape}")
print(f" - Pixel spacing: {scan.pixel_spacing} mm")
print(f" - Slice thickness: {scan.slice_thickness} mm")

# Detect vertebrae
print("\n[5/5] Detecting vertebrae...")
vert_dicts = spnt.detect_vb(scan.volume, scan.pixel_spacing)
detected_labels = [v["predicted_label"] for v in vert_dicts]
print(f"✓ Detected {len(vert_dicts)} vertebrae: {detected_labels}")

# Grade IVDs
print("\nGrading intervertebral discs...")
ivd_dicts = spnt.get_ivds_from_vert_dicts(vert_dicts, scan.volume)
ivd_grades = spnt.grade_ivds(ivd_dicts)

# Display results
print("\n" + "=" * 60)
print("GRADING RESULTS")
print("=" * 60)
print(ivd_grades)

# Save results
output_file = f'results/{scan_name}_test_results.csv'
ivd_grades.to_csv(output_file)
print(f"\n✓ Results saved to: {output_file}")

print("\n" + "=" * 60)
print("Test completed successfully!")
print("=" * 60)

if __name__ == '__main__':
main()