Wednesday July 23, 2025 4:15pm - 5:30pm PDT
Session: Novel First-Order Methods via Performance Estimation I
Chair: Alex Wang
Cluster: Conic and Semidefinite Optimization

Talk 1: Nonlinear Conjugate Gradient Methods: Worst-case Convergence Rates via Computer-assisted Analyses
Speaker: Shuvomoy Das Gupta
Abstract: We propose a computer-assisted approach to the analysis of the worst-case convergence of nonlinear conjugate gradient methods (NCGMs). These methods are known for their generally good empirical performance on large-scale optimization problems, while having relatively incomplete analyses. Using our computer-assisted approach, we establish novel complexity bounds for the Polak-Ribière-Polyak (PRP) and the Fletcher-Reeves (FR) NCGMs for smooth strongly convex minimization. In particular, we construct mathematical proofs that establish the first non-asymptotic convergence bound for FR (which is historically the first developed NCGM), and a much improved non-asymptotic convergence bound for PRP. Additionally, we provide simple adversarial examples on which these methods do not perform better than gradient descent with exact line search, leaving very little room for improvement on the same class of problems.
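The two NCGM variants analyzed in this talk differ only in the momentum coefficient β used to build the next search direction. A minimal sketch (illustrative, not the speaker's code) on a strongly convex quadratic, where exact line search has a closed form:

```python
import numpy as np

# Sketch of PRP and FR nonlinear conjugate gradient on the strongly convex
# quadratic f(x) = 0.5 x^T A x, where exact line search is available in
# closed form. The setup and names here are illustrative assumptions.

def ncgm(A, x0, beta_rule="PRP", iters=50):
    x, g = x0.copy(), A @ x0           # gradient of 0.5 x^T A x is A x
    d = -g                             # first direction: steepest descent
    for _ in range(iters):
        alpha = -(g @ d) / (d @ (A @ d))   # exact line search for a quadratic
        x = x + alpha * d
        g_new = A @ x
        if np.linalg.norm(g_new) < 1e-12:
            break
        if beta_rule == "PRP":         # Polak-Ribiere-Polyak coefficient
            beta = g_new @ (g_new - g) / (g @ g)
        else:                          # Fletcher-Reeves coefficient
            beta = (g_new @ g_new) / (g @ g)
        d = -g_new + beta * d
        g = g_new
    return x

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
A = Q.T @ Q + np.eye(5)                # positive definite, so f is strongly convex
x_prp = ncgm(A, rng.standard_normal(5), "PRP")
x_fr = ncgm(A, rng.standard_normal(5), "FR")
print(np.linalg.norm(A @ x_prp), np.linalg.norm(A @ x_fr))
```

On a quadratic with exact line search both rules reduce to linear conjugate gradient; the interesting worst-case behavior the talk addresses arises for general smooth strongly convex functions and inexact line searches.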

Talk 2: Convergence Rate of Boosted Difference of Convex Algorithm (BDCA)
Speaker: Hadi Abbaszadehpeivasti
Abstract: In recent years, the Difference of Convex Algorithm (DCA) has received significant attention for its ability to solve a wide range of non-convex optimization problems. Its efficiency and flexibility are enhanced through appropriate decompositions, which show that other methods, such as the fixed-step gradient method and the proximal gradient method, can be viewed as special cases of DCA. A variant of DCA, called boosted DCA (BDCA), has been developed in recent years and outperforms the standard DCA in practice. In this talk, I will discuss the worst-case convergence rate of the boosted DCA, a topic that has received little study to date.
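As the abstract notes, the fixed-step gradient method is a special case of DCA: with the decomposition f = g − h, g(x) = (ρ/2)‖x‖², h(x) = (ρ/2)‖x‖² − f(x), the DCA step is exactly a gradient step of length 1/ρ. A hedged sketch of the boosted variant, which adds a backtracking extrapolation along the DCA displacement (the test function, ρ, and line-search constants are assumptions, not from the talk):

```python
import numpy as np

# Sketch of boosted DCA (BDCA) under the decomposition f = g - h with
# g(x) = (rho/2)||x||^2 and h(x) = (rho/2)||x||^2 - f(x): the plain DCA
# step is then a gradient step of length 1/rho, and the "boost" is a
# backtracking extrapolation along the DCA displacement. Constants are
# illustrative assumptions.

def bdca(f, grad_f, x, rho, iters=300, lam0=2.0):
    for _ in range(iters):
        y = x - grad_f(x) / rho        # plain DCA step = gradient step of length 1/rho
        d = y - x                      # boosting direction
        lam = lam0
        while lam > 1e-10 and f(y + lam * d) >= f(y):
            lam *= 0.5                 # backtrack until the extrapolation decreases f
        x = y + lam * d if f(y + lam * d) < f(y) else y
    return x

A = np.diag([1.0, 10.0])               # smoothness L = 10, strong convexity mu = 1
b = np.array([1.0, 1.0])
f = lambda z: 0.5 * z @ (A @ z) + b @ z
grad_f = lambda z: A @ z + b
x = bdca(f, grad_f, np.array([5.0, -3.0]), rho=10.0)
print(np.linalg.norm(A @ x + b))       # gradient norm at the returned point
```

Because each accepted extrapolation is required to decrease f below the plain DCA iterate's value, the boosted method inherits DCA's descent property while typically taking longer effective steps.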

Talk 3: Composing Optimized Stepsize Schedules for Gradient Descent
Speaker: Alex Wang
Abstract: Recent works by Altschuler and Parrilo and by Grimmer, Shu, and Wang have shown that it is possible to accelerate the convergence of gradient descent on smooth convex functions, even without momentum, just by picking special stepsizes. In this paper, we provide a general theory for composing stepsize schedules that captures all recent advances in this area and more. We propose three notions of "composable" stepsize schedules with elementary associated composition operations for combining them. From these operations, in addition to recovering recent works, we construct three highly optimized sequences of stepsize schedules. We first construct optimized stepsize schedules of every length, generalizing the exponentially spaced silver stepsizes of Altschuler and Parrilo. We then construct highly optimized stepsize schedules for minimizing final objective gap or gradient norm, improving on prior rates by constants and, more importantly, matching or beating the numerically computed minimax optimal schedules of Das Gupta et al. We conjecture that these schedules are in fact minimax (information-theoretically) optimal.
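The core mechanism, gradient descent sped up purely by a nonconstant stepsize pattern, can be sketched as follows. The three-step pattern below is chosen for illustration only and is not claimed to be the silver schedule or any composed schedule from the talk; the point is simply that a long step (here 2/L) interleaved with shorter compensating steps can contract the slow eigen-modes faster than the constant stepsize 1/L does:

```python
import math
import numpy as np

# Gradient descent with a repeating nonconstant stepsize pattern on a
# quadratic with eigenvalues {0.1, 1} (so L = 1). The pattern is an
# illustrative stand-in, NOT the schedules from the talk: steps are given
# in units of 1/L, and the long middle step (2/L) is compensated by the
# shorter flanking steps so every eigen-mode still contracts per cycle.

def gd_schedule(A, b, x, pattern, cycles):
    L = np.linalg.eigvalsh(A).max()        # smoothness constant
    for _ in range(cycles):
        for h in pattern:                  # stepsize h/L; h may exceed 1
            x = x - (h / L) * (A @ x + b)
    return x

A = np.diag([0.1, 1.0])
b = np.array([1.0, -1.0])
pattern = [math.sqrt(2), 2.0, math.sqrt(2)]   # illustrative long-step pattern
x = gd_schedule(A, b, np.zeros(2), pattern, cycles=40)
print(np.linalg.norm(A @ x + b))
```

Per three-step cycle, this pattern contracts the slow mode (eigenvalue 0.1) by about 0.59, versus 0.9³ ≈ 0.73 for three constant steps of 1/L; trading off contraction across modes in this way is the intuition behind the optimized schedules the talk constructs.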

Speakers
Shuvomoy Das Gupta

Hadi Abbaszadehpeivasti

Alex Wang
Taper Hall (THH) 119 3501 Trousdale Pkwy, 119, Los Angeles, CA 90089
