CoolPDLP.jl


A pure-Julia, hardware-agnostic parallel implementation of Primal-Dual hybrid gradient for Linear Programming (PDLP) and its variants.

This package is a work in progress, with many features still missing. Please reach out if it doesn't work to your satisfaction.

Getting started

Use Julia's package manager to install CoolPDLP.jl, choosing either the latest stable version

pkg> add CoolPDLP

or the development version

pkg> add https://github.com/JuliaDecisionFocusedLearning/CoolPDLP.jl

There are two ways to call the solver: either directly or via its JuMP.jl interface.

Use with JuMP

To use CoolPDLP with JuMP, select CoolPDLP.Optimizer and customize the options:

using CoolPDLP, JuMP, CUDA, CUDA.CUSPARSE

model = Model(CoolPDLP.Optimizer)
# Set `matrix_type` and `backend` to use GPU:
set_attribute(model, "matrix_type", CUSPARSE.CuSparseMatrixCSR)
set_attribute(model, "backend", CUDABackend())
# Build and solve model as usual
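The "build and solve model as usual" step can be sketched with a small LP using only standard JuMP calls; the problem itself is made up for illustration, and the default CPU backend is used so no GPU packages are required:

```julia
using CoolPDLP, JuMP

# Solve: min -x - 2y  s.t.  x + y ≤ 4, x ≥ 0, y ≥ 0
model = Model(CoolPDLP.Optimizer)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, x + y <= 4)
@objective(model, Min, -x - 2y)
optimize!(model)
# Query the primal solution found by the solver
value(x), value(y)
```

The same model works with the GPU attributes shown above; only the `set_attribute` calls change.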

Why a new package?

There are already several open-source implementations of primal-dual algorithms for LPs (not to mention those in commercial solvers). Here is an incomplete list:

| Package | Hardware |
|---|---|
| FirstOrderLP.jl, or-tools | CPU only |
| cuPDLP.jl, cuPDLP-c | NVIDIA |
| cuPDLPx, cuPDLPx.jl | NVIDIA |
| HPR-LP, HP-LP-C, HPR-LP-PYTHON | NVIDIA |
| BatchPDLP.jl | NVIDIA |
| HiGHS | NVIDIA |
| cuopt | NVIDIA |
| torchPDLP | agnostic (via PyTorch) |
| MPAX | agnostic (via JAX) |

Unlike cuPDLP and most of its variants, CoolPDLP.jl uses KernelAbstractions.jl to target the most common GPU architectures (NVIDIA, AMD, Intel, Apple), as well as plain CPUs. It also allows you to plug in your own sparse matrix types, or experiment with different floating-point precisions. That's what makes it so cool.
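As a hedged sketch of the pluggable-types claim: assuming the `matrix_type` attribute from the JuMP example above also accepts CPU-side sparse types (an assumption extrapolated from the GPU example, not documented behavior), reduced precision might be requested like this:

```julia
using CoolPDLP, JuMP, SparseArrays

model = Model(CoolPDLP.Optimizer)
# Hypothetical: request a CPU sparse matrix in single precision.
# Whether this exact type is accepted is an assumption, not documented API.
set_attribute(model, "matrix_type", SparseMatrixCSC{Float32, Int32})
```

Check the package documentation for the list of supported matrix types and backends.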

References

PDLP: A Practical First-Order Method for Large-Scale Linear Programming, Applegate et al. (2025)

An Overview of GPU-based First-Order Methods for Linear Programming and Extensions, Lu & Yang (2025)

Roadmap

See the issue tracker for an overview of planned features.

Acknowledgements

Guillaume Dalle was partially funded through a state grant managed by Agence Nationale de la Recherche for France 2030 (grant number ANR-24-PEMO-0001).

This material is based upon work supported by the National Science Foundation AI Institute for Advances in Optimization (AI4OPT) under Grant No. 2112533 and the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2039655. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
