*(Figure: BRIDGE workflow overview)*

# BRIDGE

A developmental-reference evaluation platform for pre-transplant mDA progenitor cell products.

🌉 Candidate discovery, identity stability, and multidimensional developmental concordance.

## 🧭 Background

Stem-cell-based replacement therapy is an important regenerative strategy for repairing Parkinsonian dopaminergic circuits. Pre-transplant mDA progenitor products are evaluated as developmentally staged cells with defined regional identity, stable fate, and downstream differentiation potential.

BRIDGE uses human embryonic ventral midbrain references to guide candidate-cell discovery, target identity assessment, and multidimensional developmental concordance scoring. The workflow organizes single-cell evidence for quality control, process optimization, and cross-protocol comparison.

| Evaluation layer | Biological focus |
| --- | --- |
| Developmental reference | Human embryonic ventral midbrain programs as the in vivo baseline. |
| Candidate identity | Calibrated probability, prediction variability, and entropy. |
| Composite Likeness Score | Identity, expression, transferability, neighborhood, trajectory, and regulon concordance. |
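The Composite Likeness Score in the last row combines six component scores into one number. As a minimal sketch, assuming each component is scored in [0, 1] and combined as a weighted mean (the component names and equal weights here are illustrative; the real weighting lives in `bridge.cls`):

```python
# Sketch of a weighted Composite Likeness Score (CLS).
# Component labels A-F mirror the table above; the equal weights
# are illustrative, not BRIDGE's actual configuration.

def composite_likeness_score(components, weights=None):
    """Weighted mean of component scores, each expected in [0, 1]."""
    if weights is None:
        weights = {name: 1.0 for name in components}
    total_weight = sum(weights[name] for name in components)
    weighted = sum(components[name] * weights[name] for name in components)
    return weighted / total_weight

scores = {
    "A_identity": 0.91,
    "B_expression": 0.84,
    "C_transferability": 0.78,
    "D_neighborhood": 0.88,
    "E_trajectory": 0.73,
    "F_regulon": 0.80,
}
cls = composite_likeness_score(scores)
```

Non-uniform weights let a protocol comparison emphasize, say, trajectory concordance over raw expression similarity.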

## ✨ Workflow

| Step | Role | Output |
| --- | --- | --- |
| Step0 | Prepare environment, config, model assets, and run directory. | Ready-to-run workspace |
| Step1 | Map one in vitro .h5ad against a whole-brain reference. | RG candidate annotations and Step1 report |
| Step2 | Refine mDA progenitor identity with probability and uncertainty. | Candidate-bearing data, thresholds, probability tables, Step2 report |
| Step3 | Quantify developmental concordance with CLS components A-F. | Component scores, weighted CLS, single-dataset and protocol-comparison reports |
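Step2's "probability and uncertainty" refinement can be illustrated with a self-contained sketch: one common uncertainty measure is the Shannon entropy of a cell's per-label prediction probabilities. The function names and thresholds below are hypothetical, not BRIDGE's actual implementation:

```python
import math

def prediction_entropy(probs):
    """Shannon entropy (in nats) of a per-cell label probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def is_confident_candidate(probs, target_index, p_min=0.8, h_max=0.5):
    """Keep a cell if its target-label probability is high and its
    prediction entropy is low. Thresholds here are illustrative only."""
    return probs[target_index] >= p_min and prediction_entropy(probs) <= h_max

# A cell strongly assigned to the target label (index 0):
confident = [0.9, 0.05, 0.05]
# A cell with an ambiguous label assignment:
ambiguous = [0.4, 0.35, 0.25]
```

Thresholding on both quantities, rather than probability alone, separates cells the classifier is genuinely sure about from cells that merely edge past a probability cutoff.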

## 🚀 Getting Started

### Installation

```bash
pip install git+https://github.com/starvingarc/BRIDGE.git
# or, from a cloned source tree:
pip install -e ".[workflow]"
```

For agent-assisted setup, send this prompt to your coding agent:

```text
Help me install https://github.com/starvingarc/BRIDGE
```

### Agent-Guided Workflow

BRIDGE includes repository-local skills that guide an agent through reproducible Step0-Step3 notebooks. Use the prefix supported by your agent, for example /bridge-step1 or @bridge-step1.

| Step | Skill | Output |
| --- | --- | --- |
| Step0 | bridge-step0 | Environment, assets, config, and run directory |
| Step1 | bridge-step1 | Prescreened data, RG candidates, and notebook report |
| Step2 | bridge-step2 | Identity candidates, thresholds, probabilities, and notebook report |
| Step3 | bridge-step3 | CLS component scores and protocol comparison |

Full copy-paste demo prompts are in docs/agent_demo.md. Model assets are declared in models/assets.json and fetched separately from public object storage.
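The schema of models/assets.json is not shown in this README. As a sketch, assuming a manifest that maps asset names to download URLs (the schema, URL, and file names below are hypothetical), a fetcher for the "fetched separately" step might look like:

```python
import json
from pathlib import Path
from urllib.request import urlretrieve

# Hypothetical manifest; the real models/assets.json schema may differ.
MANIFEST = json.loads("""
{
  "whole_brain_reference": {
    "url": "https://example.org/bridge/whole_brain_reference.h5ad"
  }
}
""")

def fetch_assets(manifest, dest="models", download=urlretrieve):
    """Download each declared asset into dest, skipping files that exist."""
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    fetched = []
    for name, entry in manifest.items():
        target = dest_dir / Path(entry["url"]).name
        if not target.exists():
            download(entry["url"], target)
        fetched.append(target)
    return fetched
```

Passing `download` as a parameter keeps the fetcher testable without network access.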

### Python Usage

```python
from bridge.prescreen import prescreen
from bridge.identity import identify
from bridge.cls import CLSContext, component_A, component_B, component_C, component_D, component_E, component_F, score

from bridge.prescreen.report import write_report as write_prescreen_report
from bridge.identity.report import write_report as write_identity_report
from bridge.cls.report import write_report as write_cls_report, compare_reports
```

Each step is a Python function that can be used in notebooks or scripts. Report modules provide displayable table/figure helpers and writers for reproducible artifacts under report/.
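The report-writer pattern can be illustrated with a self-contained sketch. The real `write_report` signatures are not shown in this README, so the function below is a hypothetical stand-in that writes one score table as a CSV artifact under report/:

```python
import csv
from pathlib import Path

def write_score_table(rows, run_dir="report", filename="cls_scores.csv"):
    """Write component scores as a CSV artifact under run_dir.
    Hypothetical stand-in for the bridge.*.report writers."""
    out_dir = Path(run_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / filename
    with out_path.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["component", "score"])
        writer.writeheader()
        writer.writerows(rows)
    return out_path

path = write_score_table(
    [{"component": "A", "score": 0.91}, {"component": "B", "score": 0.84}],
)
```

Writing plain-file artifacts keyed to a run directory is what makes the notebooks reproducible: a later run or a protocol comparison can diff the files directly.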

## 🗺️ Explore

## 🛠️ Development

Run the test suite from a source checkout:

```bash
PYTHONPATH=src pytest -q
```

Repository layout:

```text
src/bridge/        Python package
configs/           public config templates
models/            model metadata and asset entry point
notebooks/         curated notebook examples; generated notebooks are run artifacts
docs/              workflow documentation and roadmap
.claude/skills/    repository-local Step0-Step3 skills
```

## Citation

BRIDGE is research software under active development. If you use it in a study, please cite the repository and include the commit hash used for analysis.