DemoFunGrasp: Universal Dexterous Functional Grasping via Demonstration-Editing Reinforcement Learning
The official PyTorch implementation of: DemoFunGrasp: Universal Dexterous Functional Grasping via Demonstration-Editing Reinforcement Learning (CVPR 2026)
DemoFunGrasp is a reinforcement learning framework for universal dexterous functional grasping. The learned policy generalizes to unseen combinations of objects and functional grasping conditions, and achieves zero-shot sim-to-real transfer. For the same object, the policy can produce diverse grasps by adjusting the grasping style and affordance.
It is recommended to use a conda environment to manage dependencies:

```bash
conda create -n demofungrasp python=3.8.19
conda activate demofungrasp
# For CUDA 11.8; change the version to match your GPU
pip install torch==2.3.0+cu118 torchvision==0.18.0+cu118 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu118
```

Install Isaac Gym and IsaacGymEnvs:
```bash
# Download IsaacGym_Preview_4_Package.tar.gz from the NVIDIA website
tar -zxvf IsaacGym_Preview_4_Package.tar.gz
cd ./isaacgym/python
pip install -e .
cd ../../
git clone https://github.com/isaac-sim/IsaacGymEnvs.git
cd IsaacGymEnvs/
pip install -e .
```

Install the other required Python packages:
```bash
pip install -r requirements.txt
```

Download the object assets and textures from here and unzip them into the directory `./assets/`.
We provide checkpoints for both ShadowHand and Inspire Hand in the `checkpoint` directory. You can run:

```bash
bash scripts/train_inspire_bash.sh
# For Shadow Hand, run:
bash scripts/train_shadow_bash.sh
```

You can modify `num_envs` to fit your GPU memory and `multiObjectList` to select a subset of objects.
After setting `num_envs=5000` and `test=False`, and changing `+run_name` to your desired experiment name, you can train your own policy by running:

```bash
bash scripts/train_inspire_bash.sh
```

Use TensorBoard to monitor training progress.
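If the launch script forwards extra arguments as Hydra-style overrides (an assumption suggested by the `+run_name` syntax; check the script's contents before relying on this), the settings above could be passed directly on the command line:

```shell
# Hypothetical invocation: num_envs, test, and +run_name are the override
# names mentioned above; whether the script forwards "$@" is an assumption.
bash scripts/train_inspire_bash.sh num_envs=5000 test=False +run_name=my_experiment
```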
You can test the trained policy and record demonstrations by running:

```bash
bash scripts/state_based_demo.sh
```

You can collect RGB observation datasets in various formats. To collect the LeRobot format, run:

```bash
bash scripts/collect_vision_dataset.sh
```

After collecting a sufficient number of trajectories (e.g., 30k), you can train any imitation learning policy on them.
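As a hedged illustration of imitating collected trajectories (the in-memory format and field names `obs`/`action` here are hypothetical, not the repository's actual schema), behavior cloning can be as simple as regressing observations onto actions:

```python
import numpy as np

# Hypothetical trajectory format: each trajectory is a dict holding an
# "obs" array of shape (T, obs_dim) and an "action" array of shape (T, act_dim).
rng = np.random.default_rng(0)
trajectories = [
    {"obs": rng.normal(size=(50, 8)), "action": rng.normal(size=(50, 4))}
    for _ in range(10)
]

# Stack all steps from all trajectories into one flat dataset.
obs = np.concatenate([t["obs"] for t in trajectories])        # (500, 8)
actions = np.concatenate([t["action"] for t in trajectories])  # (500, 4)

# Fit a linear behavior-cloning policy, action ~ obs @ W, by least squares.
W, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def policy(o):
    """Predict an action for a single observation."""
    return o @ W
```

In practice one would replace the linear map with a neural network, but the data layout (stacked observation-action pairs) stays the same.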
To train and test on other object datasets, you must preprocess the `.stl` or `.obj` files to generate point clouds and additional information. Run the following scripts:

```bash
# Generate point clouds and bounding boxes
python dataset_processor/generate_object_dataset_pcl.py
# Generate the object list
python dataset_processor/generate_urdfs_from_meshes.py
```

This repository is built upon:
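The core of the mesh preprocessing step above (turning a mesh into a point cloud plus a bounding box) can be sketched with a minimal `.obj` vertex parser. This is a simplification, not the repository's actual script: real preprocessing would typically sample the mesh surface rather than reuse vertices.

```python
import os
import tempfile
import numpy as np

def load_obj_vertices(path):
    """Read the 'v x y z' vertex lines of a Wavefront .obj file."""
    verts = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "v":
                verts.append([float(x) for x in parts[1:4]])
    return np.array(verts)

def to_point_cloud(verts, n_points=1024, seed=0):
    """Subsample (with replacement) a fixed-size point cloud from the vertices."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(verts), size=n_points)
    return verts[idx]

def bounding_box(verts):
    """Axis-aligned bounding box as (min_corner, max_corner)."""
    return verts.min(axis=0), verts.max(axis=0)

# Example: a unit cube written as a minimal .obj file.
obj_text = "\n".join(f"v {x} {y} {z}" for x in (0, 1) for y in (0, 1) for z in (0, 1))
path = os.path.join(tempfile.gettempdir(), "cube.obj")
with open(path, "w") as f:
    f.write(obj_text)

verts = load_obj_vertices(path)
pcl = to_point_cloud(verts)
lo, hi = bounding_box(verts)
print(pcl.shape, lo, hi)  # (1024, 3) [0. 0. 0.] [1. 1. 1.]
```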
If you find this work useful, please consider citing:

```bibtex
@article{mao2025universal,
  title={Universal Dexterous Functional Grasping via Demonstration-Editing Reinforcement Learning},
  author={Mao, Chuan and Yuan, Haoqi and Huang, Ziye and Xu, Chaoyi and Ma, Kai and Lu, Zongqing},
  journal={arXiv preprint arXiv:2512.13380},
  year={2025}
}
```