Session: Recent Advances in Fixed-Point Methods and Stability in Optimization Problems
Chair: Chao Ding
Cluster: Fixed Points and Variational Inequalities
Talk 1: HPR-QP: An implementation of an HPR method for solving quadratic programming
Speaker: Kaihuang Chen
Abstract: This talk introduces HPR-QP, a solver based on a Halpern Peaceman–Rachford (HPR) method for solving quadratic programming (QP) problems. The algorithm has an iteration complexity of $O(1/k)$ in terms of the Karush–Kuhn–Tucker residual and the objective error. We compare the performance of HPR-QP with that of other solvers on extensive QP datasets.
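For readers unfamiliar with the Halpern scheme, the HPR iteration can be sketched as follows; this is the standard form of Halpern acceleration, and the specific anchoring weights $\lambda_k = 1/(k+2)$ are an assumption here, not taken from the talk:

```latex
% Let T denote the Peaceman–Rachford operator and x^0 the anchor point.
% The Halpern-accelerated iteration with the standard weights \lambda_k = 1/(k+2):
\[
  x^{k+1} \;=\; \frac{1}{k+2}\, x^{0} \;+\; \frac{k+1}{k+2}\, T\!\left(x^{k}\right),
\]
% which, for a nonexpansive T, yields an O(1/k) rate on the
% fixed-point residual \|x^k - T(x^k)\|.
```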
Talk 2: Characterizations of Tilt-Stable Local Minimizers of a Class of Matrix Optimization Problems
Speaker: Shiwei Wang
Abstract: As a fundamental perturbation property, tilt stability has been widely studied because it characterizes the difficulty of a problem and explains the good behavior of multiplier methods for certain problems. In this talk, we focus on establishing a new characterization of tilt stability via the newly proposed quadratic bundle. By calculating the explicit form of the minimal quadratic bundle of a polyhedral spectral function, we further obtain, under nondegeneracy, the equivalence between tilt stability and the strong second-order sufficient condition for general composite optimization problems.
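For context, the standard definition of tilt stability (due to Poliquin and Rockafellar) can be recalled; the notation below is generic and not taken from the talk:

```latex
% A local minimizer \bar{x} of f is tilt-stable if, for some \gamma > 0,
% the tilted local solution mapping
\[
  M_\gamma(v) \;=\; \operatorname*{argmin}_{\|x - \bar{x}\| \le \gamma}
  \bigl\{ f(x) - \langle v, x \rangle \bigr\}
\]
% is single-valued and Lipschitz continuous on a neighborhood of v = 0,
% with M_\gamma(0) = \{\bar{x}\}.
```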
Talk 3: On the ergodic convergence properties of the Peaceman-Rachford method and their applications in solving linear programming
Speaker: Guojun Zhang
Abstract: In this talk, we study the ergodic convergence properties of the Peaceman–Rachford (PR) method with semi-proximal terms for solving convex optimization problems (COPs). For the first time, we establish the global convergence of the ergodic sequence of the PR method with semi-proximal terms by leveraging the theory of the degenerate proximal point method. This result represents a significant departure from previous studies on the non-ergodic convergence of the PR method, which typically require strong convexity or monotonicity conditions that are not generally satisfied in COPs. Moreover, we establish an ergodic iteration complexity of $O(1/k)$ for the PR method with semi-proximal terms, measured by the objective error and the Karush–Kuhn–Tucker residual using the $\varepsilon$-subdifferential. Based on these convergence properties, we introduce EPR-LP, a solver that uses the ergodic sequence of the PR method with semi-proximal terms to solve linear programming (LP) problems. EPR-LP incorporates an adaptive restart strategy and dynamic penalty parameter updates for efficiency and robustness. Extensive numerical experiments on LP benchmark datasets, executed on a high-performance GPU, show that our Julia-based solver outperforms the award-winning solver PDLP at a tolerance level of $10^{-8}$.
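As a point of reference, the ergodic (averaged) sequence analyzed in such results is typically defined as a running average of the iterates; uniform averaging is an assumption here, since weighted variants are also common:

```latex
% Given PR iterates x^1, \dots, x^k, the ergodic sequence is the running average
\[
  \bar{x}^{\,k} \;=\; \frac{1}{k} \sum_{i=1}^{k} x^{i},
\]
% and the O(1/k) complexity is measured at \bar{x}^{\,k} via the objective error
% and the Karush–Kuhn–Tucker residual (using the \varepsilon-subdifferential).
```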