Session: Fast Algorithmic Frameworks for Semidefinite Programming and Applications
Chair: Ling Liang
Cluster: Conic and Semidefinite Optimization
Talk 1: Convex relaxation for quantum many-body physics
Speaker: Yuehaw Khoo
Abstract: In this talk, we explore adaptations of semidefinite programming relaxations for solving many-body physics problems. Our approach transforms a high-dimensional PDE problem into a convex optimization problem, setting it apart from traditional non-convex methods that rely on nonlinear re-parameterizations of the solution. For quantum mechanical systems, we present a convex program to obtain the ground state in terms of its moments. We further introduce a near-linear time algorithm for solving the convex program using hierarchical matrices.
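The moment formulation can be illustrated on a toy example (mine, not from the talk): for the 1-D harmonic oscillator the ground-state density is Gaussian, and any valid sequence of position moments must form a positive semidefinite Hankel moment matrix — the kind of linear matrix inequality a moment-based convex relaxation enforces. A minimal numpy sketch, with illustrative moment values and matrix size:

```python
import numpy as np

# Moments m_k = ∫ x^k |ψ0(x)|^2 dx of the 1-D harmonic-oscillator
# ground state: |ψ0|^2 is Gaussian with variance 1/2, so
# m_{2k} = (2k-1)!! * (1/2)^k and all odd moments vanish.
moments = [1.0, 0.0, 0.5, 0.0, 0.75, 0.0, 1.875]

# Hankel moment matrix M[i, j] = m_{i+j}; requiring M ⪰ 0 is the
# semidefinite constraint a moment-based relaxation imposes.
n = 4
M = np.array([[moments[i + j] for j in range(n)] for i in range(n)])

# Moments of a genuine probability measure always give a PSD matrix.
print(np.linalg.eigvalsh(M).min() >= -1e-12)  # True
```

In the actual relaxation the moments are decision variables, the PSD constraint replaces this a-posteriori check, and the energy is minimized as a linear functional of the moments.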
Talk 2: Revisiting ADMM for Solving Large-Scale SDPs: Theory and Practice
Speaker: Shucheng Kang
Abstract: First-order methods like the Alternating Direction Method of Multipliers (ADMM) have shown promise for solving large-scale semidefinite programs (SDPs), yet are often dismissed as low-accuracy solvers due to sublinear worst-case complexity and observed slow convergence. In this talk, we revisit ADMM for SDPs from both theoretical and computational standpoints. On the theory side, we challenge the conventional belief that ADMM cannot achieve high accuracy. We establish a new sufficient condition for local linear convergence: if the converged primal–dual solution satisfies strict complementarity, ADMM converges linearly, whether or not nondegeneracy holds. This insight significantly broadens the class of SDPs for which fast convergence can be expected. Extensive numerical evidence supports our theoretical claims. On the engineering side, we present cuADMM, a high-performance, GPU-accelerated ADMM solver for large-scale SDPs. At its core is a factorization-free method for projection onto the PSD cone using composite polynomial filtering, which bypasses the usual bottlenecks of eigenvalue decomposition. We demonstrate that cuADMM delivers strong speedups and robust performance across a variety of challenging SDP instances arising in real-world applications.
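The factorization-free projection can be sketched with a simple composite polynomial filter (a classical Newton–Schulz scheme, not cuADMM's actual filter): the PSD projection is Π(A) = (A + A·sign(A))/2, and sign(A) can be approximated by repeatedly composing the odd polynomial p(x) = 1.5x − 0.5x³, using only matrix–matrix products. A hedged numpy sketch under the assumption that A has no eigenvalues very close to zero:

```python
import numpy as np

def psd_project_polyfilter(A, iters=30):
    """Project symmetric A onto the PSD cone without an eigendecomposition.

    Newton-Schulz iteration X <- 1.5*X - 0.5*X^3 drives the (scaled)
    eigenvalues of A toward ±1, approximating sign(A) with matrix-matrix
    products only; then Pi(A) = (A + A @ sign(A)) / 2.
    """
    X = A / np.linalg.norm(A, 2)       # scale so eigenvalues lie in [-1, 1]
    for _ in range(iters):
        X = 1.5 * X - 0.5 * X @ X @ X  # odd polynomial, composed repeatedly
    return (A + A @ X) / 2             # |A| = A @ sign(A) for symmetric A

# Compare against the exact eigendecomposition-based projection.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([1.5, 0.7, -0.3, -1.2]) @ Q.T  # indefinite test matrix

w, V = np.linalg.eigh(A)
P_exact = V @ np.diag(np.maximum(w, 0)) @ V.T
print(np.allclose(psd_project_polyfilter(A), P_exact, atol=1e-8))  # True
```

The point of such filters is that matrix products parallelize far better on GPUs than eigendecompositions; production solvers use more carefully designed composite polynomials to handle eigenvalues clustered near zero.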
Talk 3: Exploring chordal sparsity in semidefinite programming with sparse plus low-rank data matrices
Speaker: Tianyun Tang
Abstract: Semidefinite programming (SDP) problems are challenging to solve because of their high dimensionality. However, solving sparse SDP problems with small tree-width is known to be relatively easy because: (1) they can be decomposed into smaller multi-block SDP problems through chordal conversion; and (2) they have low-rank optimal solutions. In this work, we study more general SDP problems whose coefficient matrices have sparse plus low-rank (SPLR) structure. We develop a unified framework to convert such problems into sparse SDP problems with bounded tree-width. Based on this, we derive rank bounds for SDP problems with SPLR structure, which are tight in the worst case.
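The chordal decomposition behind (1) can be seen in a small numpy example (an illustration of the standard clique-decomposition theorem, not code from the paper): a PSD matrix whose sparsity graph is chordal admits a Cholesky factorization with no fill-in under a perfect elimination ordering, so it splits into rank-one PSD terms each supported on a single maximal clique — small multi-block PSD constraints in miniature.

```python
import numpy as np

# A tridiagonal (path-graph) pattern is chordal, with maximal cliques
# {0,1}, {1,2}, {2,3} and tree-width 1.
X = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])

L = np.linalg.cholesky(X)  # no fill-in: L stays lower-bidiagonal

# Each column of L is supported on one clique, so X decomposes into
# clique-local rank-one PSD blocks: X = sum_j l_j l_j^T.
terms = [np.outer(L[:, j], L[:, j]) for j in range(4)]
print(np.allclose(sum(terms), X))                  # exact decomposition
print(all(abs(L[i, j]) < 1e-12                     # support check:
          for j in range(4) for i in range(4)      # column j touches
          if i not in (j, j + 1)))                 # only clique {j, j+1}
```

In chordal conversion the same idea runs in reverse: one large PSD constraint is replaced by coupled PSD constraints on the clique submatrices, which is what makes small tree-width instances tractable.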