Commit 0aef681

Update README, docs, and paths for new dependency structure
Rewrite README Requirements and Installation sections to reference requirements.txt as the primary install method. Remove references to the legacy embedded gaze-est/ venv (no longer shipped). Update all remaining old paths (Plugins/GazeTracking/MGaze/ → GazeTracking/ Backends/MGaze/) in .gitignore, test_pipeline.yaml, GUI, model_factory, THIRD_PARTY_LICENSES, and install_dependencies.py. Bump version reference to v0.3.0-beta.
1 parent 1c54f95 commit 0aef681

7 files changed: 47 additions & 37 deletions

.gitignore

Lines changed: 1 addition & 4 deletions
```diff
@@ -35,11 +35,8 @@ Weights/
 .env
 venv/
 
-# Embedded venvs (third-party submodules)
-Plugins/GazeTracking/MGaze/gaze-estimation/gaze/
-Plugins/GazeTracking/MGaze/gaze-estimation/gaze-est/
+# Embedded venvs (gaze-estimation submodule)
 GazeTracking/Backends/MGaze/gaze-estimation/gaze/
-GazeTracking/Backends/MGaze/gaze-estimation/gaze-est/
 GazeTracking/Backends/MGaze/gaze-estimation/weights/
 GazeTracking/Backends/MGaze/gaze-estimation/assets/
```

GUI/MindSight_GUI.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -714,7 +714,7 @@ def _build_settings(self, lay):
         vl.addWidget(rb_row)
 
         self._gaze_model = QLineEdit(
-            str(_HERE / "GazeTracking" / "gaze-estimation" / "weights" / "mobileone_s0_gaze.onnx"))
+            str(_HERE / "GazeTracking" / "Backends" / "MGaze" / "gaze-estimation" / "weights" / "mobileone_s0_gaze.onnx"))
         gm_btn = _browse_btn()
         gm_btn.clicked.connect(lambda: self._browse_to(self._gaze_model, "*.onnx *.pt"))
         model_row = QWidget(); model_lay = QFormLayout(model_row)
```
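The default weight path in the GUI is assembled from path segments rather than a hard-coded string. A minimal standalone sketch of the same pattern (`_HERE` is a hypothetical stand-in for the GUI module's directory):

```python
from pathlib import Path

# Hypothetical stand-in for the GUI module's directory (_HERE in the real code).
_HERE = Path.cwd()

# Composing the path segment by segment keeps it portable across
# OS path separators; str() yields the native form for the widget.
default_weights = (_HERE / "GazeTracking" / "Backends" / "MGaze"
                   / "gaze-estimation" / "weights" / "mobileone_s0_gaze.onnx")
```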

ObjectDetection/model_factory.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -88,7 +88,7 @@ def create_yolo_detector(
 
 
 def create_face_detector():
     """Create and return a RetinaFace instance."""
-    _GAZE_DIR = Path(__file__).parent.parent / "GazeTracking" / "gaze-estimation"
+    _GAZE_DIR = Path(__file__).parent.parent / "GazeTracking" / "Backends" / "MGaze" / "gaze-estimation"
     if str(_GAZE_DIR) not in sys.path:
         sys.path.insert(0, str(_GAZE_DIR))
     from uniface import RetinaFace
```
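`create_face_detector()` relies on a vendored-module import pattern: insert the submodule directory into `sys.path` once, then import normally. A self-contained sketch (the helper name is mine, not from the repo):

```python
import sys
from pathlib import Path


def ensure_on_path(directory: Path) -> None:
    """Prepend directory to sys.path exactly once."""
    d = str(directory)
    if d not in sys.path:
        # Insert at the front so the vendored copy shadows any
        # same-named package installed elsewhere on the path.
        sys.path.insert(0, d)
```

The membership check makes the call idempotent, so it is safe to run on every factory invocation.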

README.md

Lines changed: 38 additions & 25 deletions
```diff
@@ -1,6 +1,6 @@
 # MindSight — Unified Eye-Gaze Intersection Tracker for Behavioural Neuroscience Research
 
-> **Beta Release** — This is a pre-release version (v0.2.0-beta). APIs and features may change. Bug reports and feedback are welcome via [GitHub Issues](https://github.com/kylen-d/mindsight/issues).
+> **Beta Release** — This is a pre-release version (v0.3.0-beta). APIs and features may change. Bug reports and feedback are welcome via [GitHub Issues](https://github.com/kylen-d/mindsight/issues).
 
 [![License: AGPL v3](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
 
```

````diff
@@ -97,18 +97,25 @@ Camera / Video / Image
 
 ### Python packages
 
+All dependencies are listed in `requirements.txt`. Key packages:
+
 ```
-opencv-python
-numpy
-torch
-torchvision
-onnxruntime           # or onnxruntime-gpu for CUDA
-ultralytics           # YOLO / YOLOE
-uniface               # RetinaFace face detector
-PyQt6                 # GUI only
+torch / torchvision   # Deep learning
+onnxruntime           # ONNX inference (or onnxruntime-gpu for CUDA)
+ultralytics           # YOLO / YOLOE object detection
+clip                  # Ultralytics CLIP fork (visual prompts)
+uniface               # RetinaFace face detector
+timm                  # PyTorch Image Models (UniGaze backend)
+opencv-python         # Computer vision
+matplotlib            # Charts and dashboard rendering
+pandas                # Data output
+PyQt6                 # GUI
+PyYAML                # Pipeline configuration
+tqdm                  # Progress bars
+Pillow                # Image handling
 ```
 
-> **Note:** The `gaze-estimation` submodule ships its own virtual environment (`gaze-est/`). You do **not** need to activate it separately — MindSight inserts the `gaze-estimation/` directory into `sys.path` automatically at runtime.
+> **Note:** The UniGaze backend requires `pip install unigaze` separately (non-commercial license, pins `timm==0.3.2`).
 
 ---
 
````

````diff
@@ -117,8 +124,8 @@ PyQt6                 # GUI only
 ### 1. Clone the repository
 
 ```bash
-git clone <repo-url>
-cd MindSight
+git clone https://github.com/kylen-d/mindsight.git
+cd mindsight
 ```
 
 ### 2. Create and activate a virtual environment (recommended)
@@ -132,25 +139,31 @@ source .venv/bin/activate  # macOS / Linux
 ### 3. Install dependencies
 
 ```bash
-pip install opencv-python numpy torch torchvision
-pip install onnxruntime        # CPU
-# pip install onnxruntime-gpu  # NVIDIA GPU
-pip install ultralytics
-pip install uniface
-pip install PyQt6              # GUI only
+pip install -r requirements.txt
 ```
 
-**macOS (Apple Silicon) — CoreML acceleration:**
+**GPU acceleration (optional):** Install PyTorch with CUDA support *before* running the above — see [pytorch.org/get-started](https://pytorch.org/get-started/locally/). For Apple Silicon CoreML, replace `onnxruntime` with `onnxruntime-silicon`.
+
+Alternatively, use the platform-aware helper:
 
 ```bash
-pip install onnxruntime-silicon  # or onnxruntime with CoreML support
+python install_dependencies.py  # auto-detects CUDA / Apple Silicon
 ```
 
 ### 4. Download gaze model weights
 
-The default gaze model is `gaze-estimation/weights/mobileone_s0_gaze.onnx`. Other ONNX and PyTorch weights (`resnet18`, `resnet34`, `resnet50`) are already included in the `gaze-estimation/weights/` directory.
+MGaze weights are stored in `GazeTracking/Backends/MGaze/gaze-estimation/weights/`. Download them with:
+
+```bash
+cd GazeTracking/Backends/MGaze/gaze-estimation
+bash download.sh
+```
+
+For L2CS-Net, download weights to `GazeTracking/Backends/L2CS/weights/` and pass the path via `--l2cs-model`.
+
+For Gazelle, download a checkpoint separately and pass it via `--gazelle-model`.
 
-To use the Gazelle backend, download a checkpoint separately and pass it via `--gazelle-model`.
+For UniGaze, install separately (`pip install unigaze`) and pass the model variant via `--unigaze-model`.
 
 ### 5. YOLO weights
````
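The README now routes hardware-specific installs through `install_dependencies.py`, described as auto-detecting CUDA or Apple Silicon. The detection it implies could be sketched as below; this is an illustrative guess using only stdlib probes, not the script's actual code, and the backend labels are mine:

```python
import platform
import shutil


def detect_accelerator() -> str:
    """Best-effort guess at the available compute backend."""
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "apple-silicon"          # CoreML / MPS capable
    if shutil.which("nvidia-smi"):      # NVIDIA driver tooling on PATH
        return "cuda"
    return "cpu"
```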

```diff
@@ -268,13 +281,13 @@ python MindSight.py --source video.mp4 --summary results.csv
 |---|---|---|
 | `--source` | `0` | Input: `0` = webcam, integer = camera index, path to video/image |
 | `--model` | `yolov8n.pt` | YOLO model weights |
-| `--mgaze-model` | `gaze-estimation/weights/mobileone_s0_gaze.onnx` | MGaze: ONNX or `.pt` gaze weights |
+| `--mgaze-model` | `GazeTracking/Backends/MGaze/gaze-estimation/weights/mobileone_s0_gaze.onnx` | MGaze: ONNX or `.pt` gaze weights |
 | `--mgaze-arch` | `None` | MGaze: Required for `.pt` models: `resnet18`, `resnet34`, `resnet50`, `mobilenetv2`, `mobileone_s0`–`s4` |
 | `--mgaze-dataset` | `gaze360` | MGaze: Dataset config used for `.pt` models |
 | `--l2cs-model` | `None` | L2CS-Net: Path to `.pkl` or `.onnx` weights |
 | `--l2cs-arch` | `ResNet50` | L2CS-Net: Architecture (`ResNet18`–`ResNet152`) |
 | `--l2cs-dataset` | `gaze360` | L2CS-Net: Dataset config key |
-| `--unigaze-model` | `None` | UniGaze: Model variant (requires `pip install unigaze timm==0.3.2`) |
+| `--unigaze-model` | `None` | UniGaze: Model variant (requires `pip install unigaze` separately) |
 | `--conf` | `0.35` | YOLO detection confidence threshold |
 | `--classes` | `None` | Filter YOLO to specific class names, e.g. `--classes person knife` |
 | `--blacklist` | `[]` | Suppress specific YOLO classes, e.g. `--blacklist chair` |
```
```diff
@@ -468,7 +481,7 @@ When `--save` is passed, an annotated `.mp4` is written alongside the source, or
 | **MGaze ONNX** (default) | `--mgaze-model` with `.onnx` path | Fastest; uses CoreML on Apple Silicon, CUDA on NVIDIA, CPU otherwise |
 | **MGaze PyTorch** | `--mgaze-model` with `.pt` + `--mgaze-arch` | Requires `--mgaze-arch` to identify the architecture |
 | **L2CS-Net** | `--l2cs-model <weights.pkl>` | ResNet50 with dual classification heads; ~3x more accurate than MGaze on MPIIGaze (3.92 vs ~11 deg MAE) |
-| **UniGaze** (optional) | `--unigaze-model <variant>` | ViT + MAE pre-training; best cross-dataset accuracy (~9.4 deg Gaze360). Requires `pip install unigaze timm==0.3.2` (non-commercial license) |
+| **UniGaze** (optional) | `--unigaze-model <variant>` | ViT + MAE pre-training; best cross-dataset accuracy (~9.4 deg Gaze360). Requires `pip install unigaze` separately (non-commercial license) |
 | **Gazelle** | `--gazelle-model <ckpt.pt>` | Scene-level DINOv2 model; processes all faces in a single forward pass; outputs a gaze heatmap rather than pitch/yaw |
 
 **MGaze architectures** (`--mgaze-arch`):
```

THIRD_PARTY_LICENSES.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -8,8 +8,8 @@ This file documents their licenses for compliance purposes.
 ### MGaze / gaze-estimation
 - **License:** MIT
 - **Copyright:** (c) 2024 Yakhyokhuja Valikhujaev
-- **Location:** `Plugins/GazeTracking/MGaze/gaze-estimation/`
-- **Full license:** `Plugins/GazeTracking/MGaze/gaze-estimation/LICENSE`
+- **Location:** `GazeTracking/Backends/MGaze/gaze-estimation/`
+- **Full license:** `GazeTracking/Backends/MGaze/gaze-estimation/LICENSE`
 
 ### Gazelle (Gaze-LLE)
 - **License:** MIT
```

install_dependencies.py

Lines changed: 3 additions & 3 deletions
```diff
@@ -2,9 +2,9 @@
 """
 MindSight dependency installer.
 
-Installs all packages required by:
-- gaze_tracker.py / ObjectDetection/YOLO/yolo_tracking.py (the MindSight orchestration layer)
-- GazeTracking/Backends/MGaze/gaze-estimation/ (the embedded gaze-estimation submodule)
+Installs all packages required by MindSight. Dependencies are also listed
+in requirements.txt — this script adds platform-aware PyTorch installation
+(CUDA, Apple Silicon, or CPU).
 
 Usage:
     python install_dependencies.py
```
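A platform-aware installer like this typically shells out to pip in the current interpreter, switching the wheel index per platform. A hedged sketch of that pattern (the helper name is mine, and it builds the command rather than executing it so the sketch stays side-effect free; the index URL in the comment is the published PyTorch CUDA wheel index):

```python
import sys


def pip_install_cmd(packages, index_url=None):
    """Build a pip install command for the current interpreter's environment."""
    cmd = [sys.executable, "-m", "pip", "install", *packages]
    if index_url:
        cmd += ["--index-url", index_url]
    return cmd

# A real installer would then run it, e.g. for CUDA 12.1 wheels:
# subprocess.check_call(pip_install_cmd(["torch", "torchvision"],
#                       "https://download.pytorch.org/whl/cu121"))
```

Using `sys.executable -m pip` guarantees the packages land in the same environment that will later run MindSight, even inside a venv.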

test_pipeline.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -42,6 +42,6 @@ output:
 # Plugin-specific settings — keys map directly to CLI flags (hyphens or underscores).
 # Any plugin argument works here without needing a hardcoded mapping.
 plugins:
-  mgaze_model: "Plugins/GazeTracking/MGaze/gaze-estimation/weights/resnet50_gaze.onnx"
+  mgaze_model: "GazeTracking/Backends/MGaze/gaze-estimation/weights/resnet50_gaze.onnx"
   mgaze_arch: "resnet50"
   novel_salience: true
```
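The `plugins:` comment says keys map directly to CLI flags with hyphens or underscores accepted. A hypothetical normalizer for that convention (function names are mine, not from the repo):

```python
def plugin_key_to_flag(key: str) -> str:
    """Map a YAML plugin key to its CLI flag form, e.g. mgaze_model -> --mgaze-model."""
    return "--" + key.strip().replace("_", "-")


def flag_to_plugin_key(flag: str) -> str:
    """Inverse mapping, useful for looking a parsed flag back up in the YAML."""
    return flag.lstrip("-").replace("-", "_")
```

Round-tripping through both functions is the property that makes "hyphens or underscores" interchangeable in the config file.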
