Monday July 21, 2025 10:30am - 11:45am PDT
Session: Advances in large-scale nonlinear optimization for data science
Chair: Jiawei Zhang
Cluster: Nonlinear Optimization

Talk 1: On Squared-Variable Formulations for Nonlinear Semidefinite Programming
Speaker: Lijun Ding
Abstract: We study squared-variable formulations for nonlinear semidefinite programming. We show that second-order stationary points of the nonsymmetric squared-variable formulation are equivalent to those of the nonlinear semidefinite program. We also show that this equivalence fails for local minimizers and second-order stationary points of the symmetric squared-variable formulation, correcting a false understanding in the literature, and we provide sufficient conditions under which the correspondence does hold.
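
For context, a squared-variable formulation trades the conic constraint $X \succeq 0$ for an unconstrained square; a minimal sketch in generic notation (the talk's precise setting may differ):

$\min_{X \succeq 0} f(X) \;\rightsquigarrow\; \min_{Y \in \mathbb{R}^{n \times p}} f(YY^\top) \text{ (nonsymmetric)} \quad \text{or} \quad \min_{Y \in \mathbb{S}^n} f(Y^2) \text{ (symmetric)},$

since $YY^\top$ and $Y^2$ are positive semidefinite for any $Y$. The equivalence result above concerns second-order stationary points of the nonsymmetric variant.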

Talk 2: High-probability complexity guarantees for nonconvex minimax problems
Speaker: Yasa Syed
Abstract: Stochastic smooth nonconvex minimax problems are prevalent in machine learning, e.g., GAN training, fair classification, and distributionally robust learning. Stochastic gradient descent ascent (GDA)-type methods are popular in practice due to their simplicity and single-loop nature. However, there is a significant gap between theory and practice regarding high-probability complexity guarantees for these methods on stochastic nonconvex minimax problems. Existing high-probability bounds for GDA-type single-loop methods apply only to convex/concave minimax problems and to particular non-monotone variational inequality problems under some restrictive assumptions. In this work, we address this gap by providing the first high-probability complexity guarantees for nonconvex/PL minimax problems, i.e., those corresponding to a smooth function that satisfies the PL condition in the dual variable. Specifically, we show that when the stochastic gradients are light-tailed, the smoothed alternating GDA method can compute an $\varepsilon$-stationary point within $\mathcal{O}(\frac{\ell \kappa^2 \delta^2}{\varepsilon^4} + \frac{\kappa}{\varepsilon^2}(\ell+\delta^2\log(1/q)))$ stochastic gradient calls with probability at least $1-q$ for any $q\in(0,1)$, where $\mu$ is the PL constant, $\ell$ is the Lipschitz constant of the gradient, $\kappa=\ell/\mu$ is the condition number, and $\delta^2$ denotes a bound on the variance of the stochastic gradients. We also present numerical results on a nonconvex/PL problem with synthetic data and on distributionally robust optimization problems with real data, illustrating our theoretical findings.
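
To make the update pattern concrete, here is a minimal NumPy sketch of a smoothed alternating GDA loop on a toy quadratic game. The toy objective is convex-concave just to keep the sketch short (the guarantees in the talk target nonconvex/PL problems), and the parameter choices and exact smoothing scheme are illustrative assumptions, not necessarily those analyzed in the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 10
    A = rng.standard_normal((d, d)) / np.sqrt(d)
    mu = 1.0  # f(x, y) = x^T A y - (mu/2)||y||^2 is strongly concave (hence PL) in y

    def grad_x(x, y, sigma=0.01):
        # stochastic gradient in x, with light-tailed (Gaussian) noise
        return A @ y + sigma * rng.standard_normal(d)

    def grad_y(x, y, sigma=0.01):
        # stochastic gradient in y
        return A.T @ x - mu * y + sigma * rng.standard_normal(d)

    x, y = np.ones(d), np.zeros(d)
    z = x.copy()                                      # smoothing anchor
    eta_x, eta_y, p, beta = 1e-2, 1e-1, 1.0, 0.01     # illustrative parameters
    for _ in range(20_000):
        x = x - eta_x * (grad_x(x, y) + p * (x - z))  # descent on f(., y) + (p/2)||. - z||^2
        y = y + eta_y * grad_y(x, y)                  # alternating: ascent uses the new x
        z = z + beta * (x - z)                        # anchor slowly tracks x
    print("final ||x||, ||y||:", np.linalg.norm(x), np.linalg.norm(y))

For this toy game the unique stationary point is the origin, so both norms should shrink to the noise floor; the single-loop structure (one x step, one y step, one anchor step per iteration) is the point of the sketch.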

Talk 3: Sparse Solutions to Linear Systems via Polyak’s Stepsize
Speaker: Yura Malitsky
Abstract: This talk explores the implicit bias of entropic mirror descent in finding sparse solutions to linear systems, emphasizing the importance of appropriate initialization. We present an adaptive approach to improving the algorithm, using Polyak's stepsizes as a key tool.
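
As a rough illustration of the ingredients (entropic mirror descent plus a Polyak stepsize), here is a NumPy sketch for $f(x) = \frac{1}{2}\|Ax-b\|^2$ on the positive orthant, using $f^* = 0$ for a consistent system. The initialization scale, the norm in the stepsize, and the positivity restriction are illustrative assumptions, not necessarily the setup of the talk.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 20, 100
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=5, replace=False)] = 1.0  # sparse planted solution
    b = A @ x_true                                      # consistent system, so f* = 0

    x = np.full(n, 1e-3)          # small positive init (initialization drives the implicit bias)
    for _ in range(5_000):
        r = A @ x - b
        f = 0.5 * (r @ r)         # f(x) = 0.5 * ||Ax - b||^2
        if f < 1e-12:
            break
        g = A.T @ r               # gradient of f
        eta = f / (g @ g)         # Polyak stepsize: (f(x) - f*) / ||grad f(x)||^2
        x = x * np.exp(-eta * g)  # entropic mirror step = multiplicative update
    print("residual:", np.linalg.norm(A @ x - b), " entries above 1e-3:", int((x > 1e-3).sum()))

The multiplicative update keeps iterates strictly positive, and with a small initialization the implicit bias of the entropy geometry tends to concentrate mass on few coordinates, which is the phenomenon the abstract refers to.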

Speakers

Jiawei Zhang
Lijun Ding
Yasa Syed
Yura Malitsky
Taper Hall (THH) 106, 3501 Trousdale Pkwy, Los Angeles, CA 90089
