Thursday July 24, 2025 1:15pm - 2:30pm PDT
Session: Recent Advances in Stochastic First-Order Methods
Chair: Tianjiao Li
Cluster: Optimization Under Uncertainty and Data-driven Optimization

Talk 1: Auto-conditioned gradient methods for nonconvex stochastic optimization
Speaker: Guanghui Lan
Abstract: We present a novel class of projected gradient methods, termed the auto-conditioned projected gradient (AC-PG) method, for minimizing a smooth but not necessarily convex function over a compact convex set. AC-PG requires neither the Lipschitz constant of the objective function as input nor any line search procedure, yet it still achieves the best-known iteration complexity of the projected gradient method for finding an approximate stationary point of the problem. The key idea is to estimate the Lipschitz constant using first-order information gathered from previous iterations, and to show that the error caused by underestimating the Lipschitz constant can be properly controlled. We then generalize AC-PG to stochastic nonconvex optimization and derive the well-known sample complexity of stochastic projected gradient descent for a class of nonconvex stochastic optimization problems.
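The auto-conditioning idea described above, estimating the Lipschitz constant from first-order information at consecutive iterates rather than requiring it as input, can be illustrated with a minimal sketch. This is not the paper's exact AC-PG scheme; the projection set, step rule, and the `ac_pg` helper are illustrative assumptions:

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball {x : ||x|| <= radius}
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def ac_pg(grad, x0, iters=100, L0=1.0, radius=1.0):
    """Illustrative auto-conditioned projected gradient sketch.

    The local Lipschitz constant is estimated from gradients at
    consecutive iterates instead of being supplied up front.
    """
    x_prev, g_prev = x0, grad(x0)
    L = L0
    x = project_ball(x_prev - g_prev / L, radius)
    for _ in range(iters):
        g = grad(x)
        dx = np.linalg.norm(x - x_prev)
        if dx > 1e-12:
            # local curvature estimate from first-order information
            L = max(L, np.linalg.norm(g - g_prev) / dx)
        x_prev, g_prev = x, g
        x = project_ball(x - g / L, radius)
    return x

# Usage: minimize f(x) = 0.5 ||x - c||^2 over the unit ball, with c
# outside the ball; the minimizer is the projection of c, here (1, 0).
c = np.array([2.0, 0.0])
x_star = ac_pg(lambda x: x - c, np.zeros(2), iters=200)
```

The sketch only ever increases the curvature estimate; the abstract's key point is that even an underestimated Lipschitz constant can be tolerated with a suitable analysis.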

Talk 2: A Novel Catalyst Scheme for Stochastic Minimax Optimization
Speaker: Yan Li
Abstract: We present a proximal-point-based catalyst scheme for simple first-order methods applied to convex minimization and convex-concave minimax problems. In particular, for smooth and (strongly) convex minimization problems, the proposed catalyst scheme, instantiated with a simple variant of the stochastic gradient method, attains the optimal rate of convergence in terms of both deterministic and stochastic errors. For smooth and strongly-convex-strongly-concave minimax problems, the catalyst scheme attains the optimal rate of convergence for deterministic and stochastic errors up to a logarithmic factor. To the best of our knowledge, this convergence rate is attained for the first time by stochastic first-order methods in the literature. We obtain this result by designing and catalyzing a novel variant of the stochastic extragradient method for solving smooth and strongly-monotone variational inequalities, which may be of independent interest.
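The extragradient building block mentioned at the end of the abstract can be sketched in a few lines: each iteration queries the operator twice, once to extrapolate and once to take the actual step. This is a generic illustration for a strongly monotone operator, not the paper's catalyzed variant; the test operator and step size are assumptions:

```python
import numpy as np

def extragradient(F, z0, eta=0.1, iters=500, noise=0.0, rng=None):
    """Sketch of (stochastic) extragradient for a monotone operator F.

    Setting noise > 0 mimics a stochastic oracle for F; with noise = 0
    the method reduces to the deterministic extragradient scheme.
    """
    rng = rng or np.random.default_rng(0)
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        g1 = F(z) + noise * rng.standard_normal(z.shape)
        z_half = z - eta * g1          # extrapolation step
        g2 = F(z_half) + noise * rng.standard_normal(z.shape)
        z = z - eta * g2               # actual update step
    return z

# Usage: the saddle point of f(x, y) = 0.5 x^2 + x y - 0.5 y^2 is (0, 0);
# its operator F = (df/dx, -df/dy) is strongly monotone.
F = lambda z: np.array([z[0] + z[1], z[1] - z[0]])
z_star = extragradient(F, np.array([3.0, -2.0]))
```

Plain gradient descent-ascent can diverge on such problems; the extrapolation step is what makes the iteration contract toward the saddle point.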

Talk 3: Bridging constrained stochastic optimization and statistical learning
Speaker: Wenlong Mou
Abstract: First-order stochastic optimization algorithms and complexity-based statistical risk bounds form the foundation of modern machine learning theory. When seeking optimal parameters within a constrained set, the achievable statistical risk depends on the complexity of the constraint set rather than on the ambient dimension. However, existing first-order stochastic optimization methods often fail to adapt to this complexity, making it challenging to attain instance-dependent optimal risk. In this talk, I will discuss recent advances in constrained stochastic optimization. We examine the complexity of stochastic gradient methods and their variance-reduced counterparts, exploring how these algorithms respond to the geometric structures of constraint sets. I will also present new instance-dependent optimality guarantees for a class of variance-reduced algorithms, highlighting how these results provide concrete guidance on the choice of parameters.
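The baseline method the talk examines, a stochastic gradient step followed by projection onto the constraint set, can be sketched as follows. The box constraint, step-size schedule, and iterate averaging here are illustrative assumptions, not the instance-dependent algorithms from the talk:

```python
import numpy as np

def projected_sgd(grad_sample, project, x0, steps=5000, step0=0.5, rng=None):
    """Sketch of projected stochastic gradient descent.

    grad_sample(x, rng) returns an unbiased stochastic gradient;
    project(x) maps the iterate back onto the constraint set.
    A running average of the iterates reduces output variance.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad_sample(x, rng)
        x = project(x - step0 / np.sqrt(t) * g)   # step, then project
        avg += (x - avg) / t                      # running iterate average
    return avg

# Usage: minimize E[0.5 ||x - c + noise||^2] over the box [0, 1]^2
# with c = (2, -1); the constrained minimizer is (1, 0).
c = np.array([2.0, -1.0])
grad_sample = lambda x, rng: (x - c) + 0.1 * rng.standard_normal(2)
project = lambda x: np.clip(x, 0.0, 1.0)
x_hat = projected_sgd(grad_sample, project, np.array([0.5, 0.5]))
```

The projection is where the geometry of the constraint set enters; the talk's point is that the achievable risk should scale with the complexity of that set rather than the ambient dimension, which plain projected SGD does not automatically exploit.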

Speakers

Tianjiao Li

PhD student, Georgia Institute of Technology
Name: Tianjiao Li
Title: PhD student
Affiliation: Georgia Institute of Technology
Bio: Tianjiao Li is a fifth-year PhD candidate in the School of Industrial and Systems Engineering at Georgia Institute of Technology, advised by Prof. George Lan and Prof. Ashwin Pananjady. His research...

Guanghui Lan


Yan Li


Wenlong Mou

Taper Hall (THH) 208, 3501 Trousdale Pkwy, Los Angeles, CA 90089
