Jan Held, Renaud Vandeghen, Sanghyun Son, Daniel Rebain, Matheus Gadelha, Yi Zhou, Ming C. Lin, Marc Van Droogenbroeck, Andrea Tagliasacchi
This repo contains the official implementation for the paper "Triangle Splatting+: Differentiable Rendering with Opaque Triangles".
The code has been used and tested with Python 3.11 and CUDA 12.6.
Clone the repository together with its submodules by running:

```shell
git clone https://github.com/trianglesplatting2/triangle-splatting2 --recursive
cd triangle-splatting2
```

Then, we suggest using a virtual environment to install the dependencies:

```shell
micromamba create -n triangle-splatting2 python=3.11
micromamba activate triangle-splatting2
micromamba install nvidia/label/cuda-12.6.0::cuda
pip install torch==2.7.1 torchvision==0.22.1
pip install -r requirements.txt
```

Finally, you can compile the custom CUDA kernels by running:

```shell
bash compile.sh
cd submodules/simple-knn
pip install .
```

To install the Delaunay triangulation module (adapted from RadFoam), run the following commands:
```shell
cmake -S . -B build -DCMAKE_INSTALL_PREFIX="$(pwd)/triangulation"
cmake --build build
cmake --install build
```

You may need to specify the location of your pybind11 installation with the `-Dpybind11_DIR` flag if CMake cannot find it automatically.
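For example, if pybind11 was installed through pip, its CMake config directory can usually be queried with pybind11's own helper module (a sketch; the exact path depends on your environment):

```shell
# Point CMake at pybind11's exported config. `python -m pybind11 --cmakedir`
# prints the directory containing pybind11Config.cmake.
cmake -S . -B build \
  -DCMAKE_INSTALL_PREFIX="$(pwd)/triangulation" \
  -Dpybind11_DIR="$(python -m pybind11 --cmakedir)"
cmake --build build
cmake --install build
```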
To train our model, use the following command:

```shell
python train.py -s <path_to_scenes> -m <output_model_path> --eval
```

To train on indoor scenes, add the `--indoor` flag:

```shell
python train.py -s <path_to_scenes> -m <output_model_path> --indoor --eval
```

To run the full evaluation on MipNeRF-360, use the following command:

```shell
bash bash_scripts/run_all.sh <path_to_save>
```

Note that this command assumes you are using a machine with Slurm. Alternatively, you can run the full evaluation without Slurm:

```shell
python full_eval.py --mipnerf360 <path_to_mipnerf360> --output_path <path_to_save>
```

To render a scene:

```shell
python render.py -m <path_to_model>
```

To create a video:

```shell
python create_video.py -m <path_to_model> -s <path_to_scenes>
```

To save your optimized scene after training, just run:

```shell
python create_ply.py <output_model_path>
```
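To sanity-check the exported file, a minimal PLY header reader can report how many vertices and faces were written. This is a self-contained sketch using only the standard PLY header format, not anything specific to this repo:

```python
def read_ply_header(path):
    """Parse a PLY header and return a {element_name: count} dict.

    The header of both ASCII and binary PLY files is plain text, so
    reading line by line up to 'end_header' works for either format.
    """
    counts = {}
    with open(path, "rb") as f:
        assert f.readline().strip() == b"ply", "not a PLY file"
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line == "end_header":
                break
            parts = line.split()
            if parts and parts[0] == "element":
                counts[parts[1]] = int(parts[2])
    return counts
```

For example, `read_ply_header("scene.ply").get("face")` gives the number of exported triangles.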
If you want to run a scene in a game engine yourself, you can download the Garden, Bicycle, or Truck scenes from the following link. To achieve the highest visual quality, you should use 4× supersampling.
If you want to try out physics interactions or explore the environment with a character, you can download the Unity project from the link below: link. Again, 4× supersampling gives the highest visual quality.
First, you need to create a mask of the objects you want to extract. We created a lightweight utility script that produces a JSON file for a given image:

```shell
python annotate_points_boxes.py <Image.png>
```

This code relies on Segment Anything Model 2 (SAM 2). You can follow the instructions in its repository to install it, or run the following command to install it automatically:

```shell
pip install 'git+https://github.com/facebookresearch/sam2.git'
```

Run the following command to get the model weights:

```shell
./checkpoints/download_ckpts.sh
```
To extract only the triangles corresponding to a specific object, run the following commands:
1. `python -m segmentation.extract_images -s <path_to_scenes> -m <path_to_model> --eval`
2. `python -m segmentation.sam_mask_generator_json --data_path <path_to_images> --save_path <path_to_save_masks> --json_path <path_to_json_file>`
3. `python -m segmentation.segment -s <path_to_scenes> -m <path_to_model> --eval --path_mask <path_to_masks> --object_id <object_id>`
4. `python -m segmentation.run_single_object -s <path_to_scenes> -m <path_to_model> --eval --ratio_threshold 0.90`
5. `python -m segmentation.create_ply <path_to_model>`
The `--ratio_threshold` parameter controls how confidently a triangle must be associated with the object to be kept. Higher values keep only triangles that are very likely to belong to the object, so higher values are recommended for object extraction and lower values for object removal.
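The effect of the threshold can be sketched as follows. This is a toy illustration, not the repo's implementation: it assumes a per-triangle ratio is computed as the fraction of views in which the triangle projects inside the object mask.

```python
def select_triangles(mask_hits, total_views, ratio_threshold=0.90, mode="extract"):
    """Pick triangle indices by how often they land inside the object mask.

    mask_hits[i]   : number of views where triangle i fell inside the mask
    total_views    : number of views considered
    mode="extract" : keep triangles with hit ratio >= threshold (the object)
    mode="remove"  : keep triangles with hit ratio <  threshold (the background)
    """
    selected = []
    for i, hits in enumerate(mask_hits):
        ratio = hits / total_views
        keep = ratio >= ratio_threshold if mode == "extract" else ratio < ratio_threshold
        if keep:
            selected.append(i)
    return selected
```

With `mask_hits=[10, 9, 1]` over 10 views and a threshold of 0.90, extraction keeps triangles 0 and 1, while removal keeps only triangle 2.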
In summary, the pipeline:

- Extracts the training views used for segmentation.
- Runs SAM on each view to generate object masks.
- Identifies which triangles belong to the selected object.
- Loads and renders only the triangles belonging to the object on the training views.
- Saves the extracted triangles as a PLY file.
Check out related work that led to our project:
- Triangle Splatting for Real-Time Radiance Field Rendering
- 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes
- DMesh++: An Efficient Differentiable Mesh for Complex Shapes
- DMesh: A Differentiable Mesh Representation
- MiLo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction
If you find our work interesting or use any part of it, please cite our paper:
```bibtex
@article{Held2025Triangle2,
  title = {Triangle Splatting+: Differentiable Rendering with Opaque Triangles},
  author = {Held, Jan and Vandeghen, Renaud and Son, Sanghyun and Rebain, Daniel and Gadelha, Matheus and Zhou, Yi and Lin, Ming C. and Van Droogenbroeck, Marc and Tagliasacchi, Andrea},
  journal = {arXiv},
  year = {2025}
}
```

And related work that strongly motivated and inspired Triangle Splatting+:
```bibtex
@article{Held2025Triangle,
  title = {Triangle Splatting for Real-Time Radiance Field Rendering},
  author = {Held, Jan and Vandeghen, Renaud and Deliege, Adrien and Hamdi, Abdullah and Cioppa, Anthony and Giancola, Silvio and Vedaldi, Andrea and Ghanem, Bernard and Tagliasacchi, Andrea and Van Droogenbroeck, Marc},
  journal = {arXiv},
  year = {2025},
}

@InProceedings{held20243d,
  title = {3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes},
  author = {Held, Jan and Vandeghen, Renaud and Hamdi, Abdullah and Deliege, Adrien and Cioppa, Anthony and Giancola, Silvio and Vedaldi, Andrea and Ghanem, Bernard and Van Droogenbroeck, Marc},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2025},
}
```

J. Held is funded by the F.R.S.-FNRS. The present research benefited from computational resources made available on Lucia, the Tier-1 supercomputer of the Walloon Region, infrastructure funded by the Walloon Region under the grant agreement n°1910247.
