
HyFI: Hyperbolic Feature Interpolation for Brain-Vision Alignment

This is the official code repository for our AAAI 2026 paper: HyFI: Hyperbolic Feature Interpolation for Brain-Vision Alignment

Motivation

We use hyperbolic space to tackle two challenges in brain–vision alignment: information imbalance and feature entanglement. Because representational capacity shrinks toward the origin and geodesics curve toward it, interpolating between semantic and perceptual embeddings along a hyperbolic geodesic both compresses and fuses the two representations, effectively mitigating both problems.

Figure (motivation): (a) An illustration of the human visual system and neural signal acquisition. Semantic and perceptual visual information is processed in the brain, but information degradation occurs when neural activity is recorded. (b) Previous works aligned semantic and perceptual features through separate pathways, overlooking their entanglement in brain signals. (c) In contrast, interpolation in hyperbolic space integrates perceptual and semantic visual features while naturally reducing representational complexity, facilitating better alignment with brain signals.
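For intuition, geodesic interpolation on the Lorentz (hyperboloid) model can be sketched as below. This is a minimal numpy sketch with hypothetical function names and unit negative curvature; it is not the repository's implementation (see `base/hycoclip/lorentz.py` for that).

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski inner product: -x0*y0 + <x_space, y_space>
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lift_to_hyperboloid(v):
    # Lift a Euclidean feature vector v onto the hyperboloid (curvature -1)
    # by setting the time coordinate x0 = sqrt(1 + ||v||^2).
    x0 = np.sqrt(1.0 + np.dot(v, v))
    return np.concatenate([[x0], v])

def geodesic_interpolate(x, y, t):
    # Point at fraction t along the geodesic from x to y on the hyperboloid.
    # d is the hyperbolic distance: arccosh(-<x, y>_L).
    d = np.arccosh(np.clip(-lorentz_inner(x, y), 1.0, None))
    if d < 1e-9:  # x and y coincide
        return x.copy()
    return (np.sinh((1 - t) * d) * x + np.sinh(t * d) * y) / np.sinh(d)

# Example: midpoint between two lifted feature vectors
x = lift_to_hyperboloid(np.array([0.5, 0.0]))
y = lift_to_hyperboloid(np.array([0.0, 1.2]))
mid = geodesic_interpolate(x, y, 0.5)
# mid stays on the manifold: <mid, mid>_L == -1 (up to float error)
```

Because geodesics bend toward the origin, the interpolated point `mid` sits closer to the origin than a Euclidean average of the endpoints would, which is the compression effect the paper exploits.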

📁 Repository Structure

HyFI/                           # Root directory
├── README.md
├── Analysis                   # Some analysis files
│   ├── check_the_retrieval.py # Retrieval results
│   └── plot_feature_dis.py    # Plot distribution of features' distances from the root
├── base                       # Core implementation files
│   ├── data.py                # Data loading
│   ├── eeg_backbone.py        # EEG encoder backbone 
│   ├── inpating_data.py       # Inpainting data module
│   ├── utils.py               # Utility functions, including losses
│   └── hycoclip               # HyCoCLIP components
│       ├── checkpoints        # Checkpoints for pre-trained models
│       ├── encoders           
│       │   ├── image_encoders.py # Image encoder for HyCoCLIP
│       │   └── text_encoders.py  # Text encoder for HyCoCLIP
│       ├── utils
│       │   ├── timer.py       
│       │   └── distributed.py 
│       ├── lorentz.py         # Lorentz manifold operations
│       ├── models.py          # MERU and HyCoCLIP models
│       └── tokenizer.py       # Tokenizer
├── configs
│   ├── MEG.yaml               # Configuration for MEG experiments
│   └── EEG.yaml               # Configuration for EEG experiments
├── exp                        # Directory for experiment results
├── preprocess
│   ├── process_eeg_whiten.py  # Script to preprocess and whiten EEG data
│   └── process_resize.py      # Script to resize image dataset
├── main.py                    # Main script for running experiments for HyFI
├── main_CLIP.py               # Main script for running experiments for CLIP interpolation
└── requirements.txt           # List of required Python packages

Environment Setup

  • Python 3.9
  • CUDA 12.4
  • PyTorch 2.6
  • pytorch-lightning==2.5.1
  • Required libraries are listed in requirements.txt.
conda create -n hyfi python=3.9
conda activate hyfi
pip install -r requirements.txt

Data Preparation

Download the THINGS-Image dataset and the THINGS-EEG dataset from their respective OSF repositories, then place them in your data directory.

Some previous works also provide preprocessed versions of these datasets on Hugging Face; you can also refer to their implementations on GitHub.

Image Feature Preparation

We prepare the visual features from a pre-trained image encoder for efficiency.

Generate the low-level CLIP features (using Gaussian blur):

python Extract_CLIP_embedding_lowlevel.py

Generate the high-level CLIP features (using fovea blur):

python Extract_CLIP_embedding.py
  • Before running any scripts, make sure that the dataset path is correctly set in the code or configuration file.
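The extraction scripts above blur each image before encoding it with CLIP. As a rough illustration of the Gaussian-blur step only, a separable blur can be sketched in plain numpy (function names are hypothetical; the actual scripts have their own blur and CLIP pipeline):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1D Gaussian kernel, normalized to sum to 1.
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur(img, sigma):
    # A 2D Gaussian is separable: convolve every row with the 1D
    # kernel, then every column of the result.
    k = gaussian_kernel(sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

img = np.random.rand(32, 32)          # stand-in for a grayscale image
low = gaussian_blur(img, sigma=2.0)   # high-frequency detail removed
```

Blurring suppresses high-frequency content, so the CLIP embedding of the blurred image emphasizes coarse, low-level structure rather than fine semantic detail.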

Running the Code

python main.py

Acknowledgements

We would like to acknowledge the use of the publicly available THINGS-Image and THINGS-EEG datasets.

This codebase is inspired by several previous works in neural decoding and in hyperbolic representation learning.

Citation

If you find this work useful, please cite:

@inproceedings{jo2026hyfi,
  author    = {Jo, Sangmin and Jeong, Wootaek and Heo, Da-Woon and Hwang, Yoohwan and Suk, Heung-Il},
  title     = {HyFI: Hyperbolic Feature Interpolation for Brain-Vision Alignment},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
  year      = {2026},
}
