Monday July 21, 2025 10:30am - 11:45am PDT
Session: GPU-Accelerated Mathematical Programming (Part 1)
Chair: Haihao Lu
Cluster: Computational Software

Talk 1: HPR-LP: An implementation of an HPR method for solving linear programming
Speaker: Defeng Sun
Abstract: In this talk, we introduce HPR-LP, a solver that implements a Halpern Peaceman-Rachford (HPR) method with semi-proximal terms for solving linear programming (LP). We begin by showing that the HPR method enjoys the highly desirable iteration complexity of O(1/k) in terms of the Karush-Kuhn-Tucker residual and the objective error, via the theory recently developed for accelerated degenerate proximal point methods. Based on these complexity results, we design an adaptive strategy of restarts and penalty-parameter updates to improve the efficiency and robustness of the HPR method. We conduct extensive numerical experiments on different LP benchmark datasets using an NVIDIA A100-SXM4-80GB GPU under different stopping tolerances. At a tolerance of 10^{-8}, our solver's Julia version achieves a 2.39x to 5.70x speedup, measured by SGM10, over the award-winning solver PDLP on benchmark datasets with presolve (2.03x to 4.06x without presolve). Several practical techniques underlying the efficiency of the solver will be highlighted.
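The anchoring step behind the O(1/k) rate is easy to illustrate. Below is a minimal Python sketch, under stated assumptions, of a Halpern iteration applied to a generic nonexpansive operator T; the HPR method applies this anchoring to a Peaceman-Rachford splitting operator with semi-proximal terms, which is not reproduced here, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def halpern_iterate(T, x0, num_iters=1000):
    # Halpern-anchored fixed-point iteration:
    #   x_{k+1} = 1/(k+2) * x0 + (k+1)/(k+2) * T(x_k)
    # The anchor weight decays like 1/k, which is what yields the
    # O(1/k) rate on the fixed-point (KKT) residual cited in the abstract.
    x = x0.copy()
    for k in range(num_iters):
        lam = 1.0 / (k + 2)
        x = lam * x0 + (1.0 - lam) * T(x)
    return x

# Toy check (not from the talk): T is a linear contraction
# (spectral norm < 1), so the unique fixed point is 0.
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
M = Q / (1.05 * np.linalg.norm(Q, 2))
print(np.linalg.norm(halpern_iterate(lambda v: M @ v, rng.standard_normal(5))))
```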

Talk 2: Restarted Halpern PDHG for linear programming
Speaker: Jinwen Yang
Abstract: We propose and analyze a new matrix-free primal-dual algorithm, called restarted Halpern primal-dual hybrid gradient (rHPDHG), for solving linear programming (LP). We show that rHPDHG achieves optimal accelerated linear convergence on feasible and bounded LP. Furthermore, we present a refined analysis that demonstrates an accelerated two-stage convergence of rHPDHG over vanilla PDHG, with an improved complexity for identification and an accelerated eventual linear convergence that does not depend on the conservative global Hoffman constant. For infeasible LP, we show that rHPDHG recovers infeasibility certificates at an accelerated linear rate, improving on previous convergence rates. Furthermore, we discuss an extension of rHPDHG that adds a reflection operation (dubbed r²HPDHG), and demonstrate that it shares all the theoretical guarantees of rHPDHG with an additional factor-of-2 speedup in the complexity bound. Lastly, we build a GPU-based LP solver, and our experiments showcase improved numerical performance compared to cuPDLP.jl.
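To make the ingredients concrete, here is a hedged Python sketch combining a PDHG step, Halpern anchoring, and a restart test for the standard-form LP min c'x s.t. Ax = b, x >= 0. The step sizes, the residual-based restart rule, and all parameter names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rhpdhg(c, A, b, num_iters=5000, restart_factor=0.5):
    # Illustrative sketch of restarted Halpern PDHG (rHPDHG) for
    #   min c'x  s.t.  Ax = b, x >= 0.
    m, n = A.shape
    eta = 0.9 / np.linalg.norm(A, 2)   # common step size tau = sigma = eta
    x, y = np.zeros(n), np.zeros(m)
    x0, y0 = x.copy(), y.copy()        # current Halpern anchor
    r0, k = None, 0

    for _ in range(num_iters):
        # One PDHG step (the operator being anchored):
        # two matvecs per iteration, no matrix factorization.
        xt = np.maximum(x - eta * (c - A.T @ y), 0.0)
        yt = y + eta * (b - A @ (2 * xt - x))

        # Fixed-point residual ||z - T(z)|| as the restart progress measure.
        r = np.linalg.norm(x - xt) + np.linalg.norm(y - yt)
        if r0 is None:
            r0 = r
        elif r <= restart_factor * r0:  # sufficient decay: restart the anchor
            x0, y0, r0, k = x.copy(), y.copy(), r, 0

        # Halpern anchoring pulls the PDHG step back toward the anchor.
        lam = 1.0 / (k + 2)
        x = lam * x0 + (1.0 - lam) * xt
        y = lam * y0 + (1.0 - lam) * yt
        k += 1
    return x, y
```

The reflected variant discussed in the talk would replace the anchored operator with its reflection; that modification is omitted here.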

Talk 3: GPU-Accelerated Linear Programming and Beyond
Speaker: Haihao Lu
Abstract: In this talk, I will discuss the recent line of research on new first-order methods for scaling up and speeding up linear programming (LP) and convex quadratic programming (QP) with GPUs. The state-of-the-art solvers for LP and QP are mature and reliable at delivering accurate solutions. However, these methods are not well suited to modern computational resources, particularly GPUs: their computational bottleneck is matrix factorization, which usually requires significant memory and is highly challenging to parallelize on GPUs. In contrast, first-order methods (FOMs) only require matrix-vector multiplications, which work well on GPUs and have already accelerated machine learning training over the last 15 years. This ongoing line of research aims to scale up and speed up LP and QP by using FOMs on GPUs. I'll discuss how we can achieve this by explaining: (i) the behavior of FOMs for LP; (ii) computational results on GPUs; (iii) theoretical results, including complexity theory and condition number theory, and how the theory can lead to better computation and a better understanding of the algorithms' performance. If time permits, I'll also talk about how to extend these methods to QP.
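The "matvecs only" point is the crux of why FOMs port to GPUs, and a small sketch makes it visible. The following Python fragment, an assumption-laden illustration rather than any solver's actual kernel, writes one first-order step against a generic array module `xp`, so the identical code runs on CPU (xp = numpy) or GPU (xp = cupy, if CuPy is installed).

```python
import numpy as np

def fom_step(xp, A, c, b, x, y, eta):
    # One first-order step for min c'x s.t. Ax = b, x >= 0.
    # The only matrix work is two matvecs (A.T @ y and A @ v):
    # no factorization, no fill-in, so it parallelizes well on GPUs.
    x_new = xp.maximum(x - eta * (c - A.T @ y), 0.0)
    y_new = y + eta * (b - A @ (2 * x_new - x))
    return x_new, y_new

# CPU run on random data (illustrative only):
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
c, b = rng.standard_normal(50), A @ np.abs(rng.standard_normal(50))
x, y = np.zeros(50), np.zeros(20)
eta = 0.9 / np.linalg.norm(A, 2)
for _ in range(100):
    x, y = fom_step(np, A, c, b, x, y, eta)

# GPU run (assumption: CuPy installed): move A, b, c, x, y to the device
# with cupy.asarray and pass xp=cupy; fom_step itself is unchanged.
```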

Speakers

Defeng Sun

Jinwen Yang

Haihao Lu
Location: Taper Hall (THH) 210, 3501 Trousdale Pkwy, Los Angeles, CA 90089
