Rui Xu1, Tianyang Xue1, Qiujie Dong1, Le Wan2, Zhe Zhu2, Peng Li3, Zhiyang Dou1, Cheng Lin4, Shiqing Xin5, Yuan Liu3, Wenping Wang6, Taku Komura1
Affiliations:
1 The University of Hong Kong
2 Tencent Visvise
3 Hong Kong University of Science and Technology
4 Macau University of Science and Technology
5 Shandong University
6 Texas A&M University
🚀 Official code of MeshMosaic: Scaling Artist Mesh Generation via Local-to-Global Assembly
🚀 We are preparing the codebase for public release. Stay tuned!
- Release pretrained checkpoints
- Release inference code
- Release data preprocessing code
- Release training code
We have tested on A100, A800, and H20 GPUs with CUDA 12.4 and CUDA 11.8. Follow the steps below to set up the environment.
conda create -n MeshMosaic python=3.12 -y
conda activate MeshMosaic
# PyTorch 2.5.1 + CUDA 12.4
pip install --index-url https://download.pytorch.org/whl/cu124 \
torch==2.5.1+cu124 torchvision==0.20.1+cu124 torchaudio==2.5.1+cu124
# Core dependencies
pip install -U xformers==0.0.28.post3
pip install torch-cluster -f https://data.pyg.org/whl/torch-2.5.1+cu124.html
pip install packaging
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention
python setup.py install
# If you encounter build issues, try FlashAttention==2.8.0 or 2.7.3
cd csrc/rotary && pip install .
cd ../layer_norm && pip install .
cd ../xentropy && pip install .
pip install pymeshlab jaxtyping boto3 trimesh beartype lightning safetensors \
open3d omegaconf sageattention triton scikit-image transformers gpustat \
wandb pudb
pip install libigl
Note: For CUDA 11.8 users, install the corresponding PyTorch cu118 wheels and a compatible torch-cluster build.
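As a convenience (not part of the repo), a stdlib-only Python snippet can report which of the required packages are importable in the active environment. The package list below is an assumption inferred from the install commands above; adjust it as needed:

```python
import importlib.util

# Assumed requirements, inferred from the install steps above (adjust as needed).
REQUIRED = ["torch", "torchvision", "xformers", "torch_cluster",
            "flash_attn", "pymeshlab", "trimesh", "lightning", "igl"]

def missing_packages(names):
    """Return the packages that cannot be found by the import system."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    gaps = missing_packages(REQUIRED)
    if gaps:
        print("Missing packages:", ", ".join(gaps))
    else:
        print("All required packages found.")
```

Running this before inference gives a quick sanity check without triggering any CUDA initialization.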
- The script `sample.sh` demonstrates mesh generation. Input is an OBJ file where each part is stored as a distinct connected component (see the `input_pf` folder).
- Pre-trained weights are available on Hugging Face: here.
Or run directly with the following command:
torchrun --nproc-per-node=1 --master_port=61107 sampleGPCBD.py \
--model_path "ckpt/final.bin" \
--steps 40000 \
--input_path input_pf \
--output_path output \
--repeat_num 4 \
--uid_list "" \
--temperature 0.5

If you do not have connected-component inputs, you can use the code in the PartField folder to convert your own mesh into an OBJ with semantic segmentation. Below is a concise workflow.
- Option A: Manual installation
conda create -n partfield python=3.10 -y
conda activate partfield
conda install -y nvidia/label/cuda-12.4.0::cuda
pip install psutil
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 \
--index-url https://download.pytorch.org/whl/cu124
pip install lightning==2.2 h5py yacs trimesh scikit-image loguru boto3
pip install mesh2sdf tetgen pymeshlab plyfile einops libigl polyscope \
potpourri3d simple_parsing arrgh open3d
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.4.0+cu124.html
sudo apt-get update && sudo apt-get install -y libx11-6 libgl1 libxrender1
pip install vtk
- Option B: Use provided environment file
conda env create -f environment.yml
conda activate partfield
mkdir -p model
Download the pretrained checkpoint: Trained on Objaverse.
Note: Due to licensing restrictions, the model also trained on PartNet cannot be released.
Run from the PartField project directory.
python partfield_inference.py -c configs/final/demo.yaml \
--opts continue_ckpt model/model_objaverse.ckpt \
result_name partfield_features/objaverse \
dataset.data_path data/objaverse_samples

We use agglomerative clustering for mesh part segmentation.
python run_part_clustering.py \
--root exp_results/partfield_features/objaverse \
--dump_dir exp_results/clustering/objaverse \
--source_dir data/objaverse_samples \
--use_agglo True \
--max_num_clusters 30 \
--option 0

Finally, split the segmented mesh into per-part connected components:
python split_connected_component.py

Please follow our progress for updates. Training code and more resources will be released soon!
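The splitting step above relies on face connectivity. As a stdlib-only sketch (a hypothetical illustration, not the repo's actual `split_connected_component.py`), faces can be grouped into connected components with a union-find over shared vertices:

```python
# Hypothetical sketch: group mesh faces into connected components
# via union-find over shared vertices (not the repo's actual script).

def find(parent, x):
    # Path-compressing root lookup.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def connected_components(faces):
    """Group face indices by shared vertices; each face is a vertex-index triple."""
    verts = {v for f in faces for v in f}
    parent = {v: v for v in verts}
    # Union all vertices of each face into one set.
    for f in faces:
        root = find(parent, f[0])
        for v in f[1:]:
            parent[find(parent, v)] = root
    # Bucket face indices by the root of their first vertex.
    groups = {}
    for i, f in enumerate(faces):
        groups.setdefault(find(parent, f[0]), []).append(i)
    return list(groups.values())

# Two triangles sharing an edge form one component; a disjoint triangle is separate.
faces = [(0, 1, 2), (1, 2, 3), (4, 5, 6)]
print(connected_components(faces))  # [[0, 1], [2]]
```

Each returned group corresponds to one part that would be written out as a separate connected component in the OBJ expected by the sampling script.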
Our code is based on these wonderful works:
If you find this work useful, please cite our paper:
@article{xu2025meshmosaic,
title={MeshMosaic: Scaling Artist Mesh Generation via Local-to-Global Assembly},
author={Xu, Rui and Xue, Tianyang and Dong, Qiujie and Wan, Le and Zhu, Zhe and Li, Peng and Dou, Zhiyang and Lin, Cheng and Xin, Shiqing and Liu, Yuan and others},
journal={arXiv preprint arXiv:2509.19995},
year={2025}
}