Tuesday July 22, 2025 1:15pm - 2:30pm PDT
Session: First-order methods for nonsmooth and constrained optimization problems
Chair: Digvijay Boob
Cluster: Nonlinear Optimization

Talk 1: Linearly Convergent Algorithms for Nonsmooth Problems with Unknown Smooth Pieces
Speaker: Zhe Zhang
Abstract: Non-smoothness is a major bottleneck to efficient optimization. In the absence of smoothness, theoretical convergence rates drop from linear to sublinear for convex programs, and become orders of magnitude worse for nonconvex programs. Moreover, this huge gap is known to be unimprovable in general. We focus on mild, structured non-smooth problems: piecewise smooth (PWS) functions, whose domain can be partitioned into subsets such that, restricted to each subset, the function is smooth. PWS functions cover ReLU functions, L1-penalties, and min-max saddle point problems as special cases. In particular, we study convex PWS functions, for which we present globally linearly convergent methods under the quadratic growth condition; as a corollary, we improve the iteration complexity for solving weakly convex PWS problems by orders of magnitude. Importantly, our method does not require any knowledge about the individual smooth pieces, and is thus applicable even to general non-smooth programs exhibiting local PWS structure.
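
For concreteness (an editorial illustration, not the speaker's; the symbols X^\star and \mu are assumptions), the two structural notions above can be stated in standard form:

% Piecewise smoothness: the domain is partitioned into subsets on each of which f is smooth.
% Example: the ReLU f(x) = \max(0, x) is nonsmooth at 0 but smooth on each piece:
f|_{(-\infty, 0]}(x) = 0, \qquad f|_{[0, \infty)}(x) = x.
% Quadratic growth (f^\star the optimal value, X^\star the optimal set):
f(x) - f^\star \;\ge\; \frac{\mu}{2}\, \mathrm{dist}(x, X^\star)^2 \quad \text{for some } \mu > 0 \text{ and all } x.

Quadratic growth is the standard relaxation of strong convexity under which linear rates become attainable for first-order methods, which is the regime the talk targets.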

Talk 2: Primal-Dual Algorithm with Last-Iterate Convergence Guarantees for Stochastic Convex Optimization Problems
Speaker: Mohammad Khalafi
Abstract: We provide the first method that obtains the best-known convergence rate guarantees on the last iterate for stochastic composite nonsmooth convex function-constrained optimization problems. Our novel and easy-to-implement algorithm is based on the augmented Lagrangian technique and uses a new linearized approximation of the constraint functions, leading to its name: the Augmented Constraint Extrapolation (Aug-ConEx) method. We show that Aug-ConEx achieves the optimal convergence rates in the nonsmooth stochastic setting, both without any strong convexity assumption and, at an improved rate, for the same problem with a strongly convex objective function. While optimal for nonsmooth and stochastic problems, the Aug-ConEx method also accelerates convergence in terms of the Lipschitz smoothness constants in both of the aforementioned cases. To the best of our knowledge, this is the first method to obtain such differentiated convergence rate guarantees on the last iterate for a composite nonsmooth stochastic setting without additional factors. We validate the efficiency of our algorithm by comparing it with a state-of-the-art algorithm through numerical experiments.
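
For background (a textbook construction, not the authors' exact updates; the notation \rho and y is an editorial assumption), the augmented Lagrangian for \min_x f(x) subject to g(x) \le 0 with penalty parameter \rho > 0 reads

% Classical augmented Lagrangian for inequality constraints (Rockafellar form):
\mathcal{L}_\rho(x, y) \;=\; f(x) \;+\; \frac{\rho}{2}\,\Big\|\big(g(x) + y/\rho\big)_+\Big\|^2 \;-\; \frac{\|y\|^2}{2\rho},
% with the classical multiplier update y^{k+1} = \big(y^k + \rho\, g(x^{k+1})\big)_+.

As the abstract indicates, Aug-ConEx works with a linearized approximation of the constraint functions inside such updates (and, per its name, extrapolates the constraint term) rather than with g itself.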

Talk 3: An Optimal Method for Minimizing Heterogeneously Smooth and Convex Compositions
Speaker: Aaron Zoll
Abstract: This talk will discuss a universal, optimal algorithm for convex minimization problems of the composite form f(x) + h(g_1(x), ..., g_m(x)). We allow each component function g_i(x) to independently range from nonsmooth Lipschitz to smooth, and from convex to strongly convex, as described by notions of Hölder continuous gradients and uniform convexity. We note that although the objective is built from a heterogeneous combination of components, it does not necessarily possess any smoothness, Lipschitzness, or other favorable structural properties. Regardless, our proposed sliding accelerated gradient method converges at least as fast as the optimal rate guarantees in terms of oracle access to (sub)gradients of each g_i separately. Furthermore, given access to an estimate of the initial distance to an optimal solution, we provide a “mostly parameter-free” variant. As a key application, fixing h to be the indicator function of the nonpositive orthant (made explicit below), this model readily captures functionally constrained minimization of f(x) subject to g_i(x) \leq 0. Our algorithm and analysis are directly inspired by the Q-analysis technique developed for such smooth constrained minimization by Zhang and Lan. Our theory recovers their accelerated guarantees and extends them to benefit from heterogeneously smooth and convex constraints.
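
The constrained-minimization reduction mentioned above is a standard identity (spelled out here editorially, not taken from the talk): with h the indicator of the nonpositive orthant,

h(u_1, \dots, u_m) \;=\; \begin{cases} 0, & u_i \le 0 \text{ for all } i,\\ +\infty, & \text{otherwise}, \end{cases}
% the composite problem collapses to functionally constrained minimization:
\min_x\; f(x) + h\big(g_1(x), \dots, g_m(x)\big) \;\equiv\; \min_{x:\, g_i(x) \le 0\ \forall i}\; f(x).
% Per-component heterogeneity via Hölder continuous gradients, exponent \nu_i \in [0,1]:
% \|\nabla g_i(x) - \nabla g_i(y)\| \le L_i \|x - y\|^{\nu_i};
% \nu_i = 1 is the Lipschitz-smooth case, \nu_i = 0 covers nonsmooth Lipschitz g_i.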

Speakers

Digvijay Boob

Zhe Zhang

Mohammad Khalafi

Aaron Zoll
Taper Hall (THH) 118, 3501 Trousdale Pkwy, Los Angeles, CA 90089
