Silent-Wear is an end-to-end, fully open-source wearable system for vocalized and silent speech detection from surface electromyography (sEMG) data.
Silent-Wear has been developed at ETH Zürich by the PULP-Bio team:

- Giusy Spacone: Conceptualization, Experimental Design, Development
- Sebastian Frey: PCB Design, Firmware, Documentation
- Fiona Meier: Hardware Development
- Giovanni Pollo: Experimental Design, Data Collection, Documentation
- Prof. Luca Benini: Supervision, Conceptualization
- Dr. Andrea Cossettini: Supervision, Project Administration
Silent-Wear relies on the following building blocks:
🔧 BIOGAP-Ultra — an ultra-low-power platform for biopotential acquisition. Hardware and firmware: https://github.com/pulp-bio/BioGAP
📿 Silent-Wear neckband — a 14-channel differential EMG neckband. System overview: https://ieeexplore.ieee.org/abstract/document/11330464 (arXiv: https://arxiv.org/abs/2509.21964)
🖥️ BIOGUI — a modular PySide6 GUI for acquiring and visualizing biosignals from multiple sources, and for managing data collection. Version used in this work: https://github.com/pulp-bio/biogui/tree/sensors_speech
📝 This repository — the source code used to preprocess EMG data and develop models that predict 8 HMI commands from vocalized and silent EMG, in line with the associated paper (arXiv: coming soon).
Specifically, it allows you to:
- Preprocess EMG data and prepare it for model training using our publicly available dataset: https://huggingface.co/datasets/PulpBio/SilentWear
- Replicate the results reported in the paper (arXiv: coming soon). See details below.
- Extend the pipeline with your own models (instructions below).
Start by creating a dedicated virtual environment.

If using conda:

```shell
conda create -n silent_wear python=3.11.9
conda activate silent_wear
```

If using venv:

```shell
python3.11 -m venv silent_wear
source silent_wear/bin/activate
```

Clone this repository and install the required dependencies:

```shell
git clone <REPO_URL>
cd SilentWear
pip install -r requirements.txt
```

You can download the data used in this work from: https://huggingface.co/datasets/PulpBio/SilentWear
The code expects the Hugging Face release layout:
```
SilentWear/
├── data_raw_and_filt/
└── wins_and_features/
```
Further details on the contents of the dataset are available at: https://huggingface.co/datasets/PulpBio/SilentWear/blob/main/README.md

Before running the experiments, update the data paths in:

- config/paper_models_config.yaml
- config/create_windows.yaml

If you want to collect your own data using the BioGUI, see Optional: raw data preprocessing below.
The reproduce_paper_scripts folder allows you to reproduce the results of the paper (arXiv: coming soon).
```shell
cd reproduce_paper_scripts
python 20_make_windows_and_features.py --data_dir ./path_to_your_data
```

This script:

- Reads EMG recordings (saved as .h5 files)
- Generates time windows with user-selectable lengths
- (Optionally) extracts time-domain and frequency-domain features for classical ML models
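The windowing and time-domain feature steps above can be sketched as follows. This is a minimal illustration, not the repository's implementation; the sampling rate, hop size, and the RMS feature choice are assumptions made for the example.

```python
import numpy as np

def make_windows(emg: np.ndarray, fs: int, win_s: float, hop_s: float) -> np.ndarray:
    """Segment a (samples, channels) EMG array into overlapping windows."""
    win = int(win_s * fs)
    hop = int(hop_s * fs)
    starts = range(0, emg.shape[0] - win + 1, hop)
    # Result shape: (n_windows, win, channels)
    return np.stack([emg[s:s + win] for s in starts])

def rms_features(windows: np.ndarray) -> np.ndarray:
    """Root-mean-square per window and channel, a common time-domain EMG feature."""
    return np.sqrt((windows ** 2).mean(axis=1))  # (n_windows, channels)

# Example: 2 s of 14-channel data at an assumed 500 Hz sampling rate,
# segmented into 1.4 s windows with a 0.2 s hop.
emg = np.random.randn(1000, 14)
wins = make_windows(emg, fs=500, win_s=1.4, hop_s=0.2)
feats = rms_features(wins)
```

The actual window lengths and feature set used by 20_make_windows_and_features.py are controlled by its configuration.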
In our work, we conduct four experiments:
Global experiment.

Train Random Forest models:

```shell
python reproduce_paper_scripts/30_run_experiments.py --base_config config/paper_models_config.yaml --model_config config/models_configs/random_forest_config.yaml --data_dir ./data --artifacts_dir ./artifacts --experiment global
```

Train SpeechNet models:

```shell
python reproduce_paper_scripts/30_run_experiments.py --base_config config/paper_models_config.yaml --model_config config/models_configs/speechnet_config.yaml --data_dir ./data --artifacts_dir ./artifacts --experiment global
```

Inter-session experiment.

Train Random Forest models:

```shell
python reproduce_paper_scripts/30_run_experiments.py --base_config config/paper_models_config.yaml --model_config config/models_configs/random_forest_config.yaml --data_dir ./data --artifacts_dir artifacts --experiment inter_session --inter_session_windows_s 1.4
```

Train SpeechNet models:

```shell
python reproduce_paper_scripts/30_run_experiments.py --base_config config/paper_models_config.yaml --model_config config/models_configs/speechnet_config.yaml --data_dir ./data --artifacts_dir artifacts --experiment inter_session
```

Note: by default this runs all window-size ablations, with window sizes [0.4, 0.6, 0.8, 1.0, 1.2, 1.4]. Pass a single float to --inter_session_windows_s to train on one specific window size.

Train-from-scratch experiment:

```shell
python reproduce_paper_scripts/30_run_experiments.py --base_config config/paper_models_config.yaml --model_config config/models_configs/speechnet_config.yaml --data_dir ./data --artifacts_dir artifacts --experiment train_from_scratch --tfs_config config/paper_train_from_scratch_config.yaml --tfs_windows_s 1.4
```

Adjust --tfs_windows_s to select a different window size.

Inter-session fine-tuning experiment:

```shell
python reproduce_paper_scripts/30_run_experiments.py --base_config config/paper_models_config.yaml --model_config config/models_configs/speechnet_config.yaml --data_dir ./data --artifacts_dir artifacts --experiment inter_session_ft --ft_config config/paper_ft_config.yaml --ft_windows_s 1.4
```

Adjust --ft_windows_s to select a different window size.
Run these commands to generate the results.

Random Forest:

```shell
python utils/III_results_analysis/I_global_intersession_analysis.py --artifacts_dir ./artifacts --experiment global --model_name random_forest --model_name_id w1400ms
```

SpeechNet:

```shell
python utils/III_results_analysis/I_global_intersession_analysis.py --artifacts_dir ./artifacts --experiment global --model_name speechnet --model_name_id w1400ms --plot_confusion_matrix --transparent
```

Switch --experiment between global and inter_session.

```shell
python utils/III_results_analysis/II_infotransrate.py --artifacts_dir ./artifacts --experiment inter_session --model_name speechnet
```

```shell
python utils/III_results_analysis/III_ft_results.py --artifacts_dir ./artifacts --model_name speechnet --model_base_id w1400ms --inter_session_model_id model_1 --ft_id ft_config_0 --bs_id bs_config_0
```

Note: if you ran multiple fine-tuning or baseline rounds for the same window size, adjust --ft_id and --bs_id accordingly. If you ran the inter-session models multiple times, change --inter_session_model_id.
Note: Small performance variations may occur due to randomness but remain within the reported standard deviation.
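If you want individual runs to be repeatable, fixing the random seeds before training is the usual pattern. The sketch below is illustrative only; the paper scripts may handle seeding themselves, and deep-learning frameworks need their own seeding calls.

```python
import random

import numpy as np

def set_seed(seed: int) -> None:
    """Fix the Python and NumPy RNGs. If PyTorch is used, torch.manual_seed
    would be needed as well (not shown here)."""
    random.seed(seed)
    np.random.seed(seed)

set_seed(0)
first = np.random.rand(3)
set_seed(0)
second = np.random.rand(3)  # identical draws after reseeding
```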
The reproduce_paper_scripts folder is built around the standalone scripts contained in:

- utils/II_feature_extraction and utils/III_results_analysis
- offline_experiments

The scripts in these folders can be run independently and used as a starting point to test your own models.
If you recorded new data using the BioGUI, you can convert your .bio recordings to .h5 using:
utils/I_data_preparation/data_preparation.py
Then run windowing/feature extraction as above.
Silent-Wear aims to foster a community-driven effort toward advancing EMG-based Human–Machine Interfaces (HMI).
We strongly encourage contributions from researchers, developers, and practitioners.
You can contribute in several ways:
You can replicate the data collection protocol using the open-source BIOGUI platform:
https://github.com/pulp-bio/biogui/tree/sensors_speech
We welcome:
- New subjects
- Additional commands
- Different recording conditions
- Cross-lingual or multilingual datasets
If you collect new data, please open an issue to discuss integration.
To integrate a new model:
- Add your configuration file under config/models_configs/
- Implement your model in the models/ directory
- Add your model factory to models/models_factory.py
- Submit a pull request with a short description of your approach and results
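As a sketch of what registering a model with a factory can look like (all names below are hypothetical; the actual interface is defined in models/models_factory.py and may differ):

```python
# Hypothetical registration pattern; the real interface lives in
# models/models_factory.py and may differ.
MODEL_REGISTRY: dict[str, type] = {}

def register_model(name: str):
    """Class decorator that stores a model class under a string key."""
    def wrapper(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrapper

@register_model("my_model")
class MyModel:
    def __init__(self, config: dict):
        self.config = config

def build_model(name: str, config: dict):
    """Look up a registered model by the name given in its config file."""
    return MODEL_REGISTRY[name](config)

model = build_model("my_model", {"lr": 1e-3})
```

A registry like this lets the experiment runner instantiate any model purely from the name in its YAML config, which is why a new model needs both a config file and a factory entry.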
Contributions are also welcome for:
- Data preprocessing
- Feature extraction
- Evaluation protocols
- Documentation improvements
- Bug fixes and performance optimizations
If you use this work, we strongly encourage you to cite:
```bibtex
@online{spacone_silentwear_26,
  author = {Spacone, Giusy and Frey, Sebastian and Pollo, Giovanni and Burrello, Alessio and Pagliari, J. Daniele and Kartsch, Victor and Cossettini, Andrea and Benini, Luca},
  title  = {SilentWear: An Ultra-Low Power Wearable System for EMG-Based Silent Speech Recognition},
  year   = {2026},
  url    = {coming soon}
}
```

```bibtex
@inproceedings{meier_wearneck_26,
  author    = {Meier, Fiona and Spacone, Giusy and Frey, Sebastian and Benini, Luca and Cossettini, Andrea},
  booktitle = {2025 IEEE SENSORS},
  title     = {A Parallel Ultra-Low Power Silent Speech Interface Based on a Wearable, Fully-Dry EMG Neckband},
  year      = {2025},
  pages     = {1-4},
  keywords  = {Wireless communication;Vocabulary;Wireless sensor networks;Accuracy;Low power electronics;Electromyography;Robustness;Decoding;Wearable sensors;Textiles;EMG;wearable;ultra-low power;HMI;speech;silent speech},
  doi       = {10.1109/SENSORS59705.2025.11330464}
}
```

```bibtex
@article{11346484,
  author   = {Frey, Sebastian and Spacone, Giusy and Cossettini, Andrea and Guermandi, Marco and Schilk, Philipp and Benini, Luca and Kartsch, Victor},
  journal  = {IEEE Transactions on Biomedical Circuits and Systems},
  title    = {BioGAP-Ultra: A Modular Edge-AI Platform for Wearable Multimodal Biosignal Acquisition and Processing},
  year     = {2026},
  pages    = {1-17},
  keywords = {Electrocardiography;Biomedical monitoring;Monitoring;Electromyography;Electroencephalography;Artificial intelligence;Heart rate;Estimation;Temperature measurement;Hardware;biopotential;ExG;photoplethysmogram;Human-Machine Interface;sensor fusion},
  doi      = {10.1109/TBCAS.2026.3652501}
}
```

This project makes use of the following licenses:
- Apache License 2.0 — see the LICENSE file for details.
- Images (extras/) are under the Creative Commons Attribution 4.0 International License — see the LICENSE_IMG file for details.





