Session: Recent Advances in Stochastic Optimization: Complexity, Adaptivity, and Nonsmooth Extensions (I)
Chairs: Sen Na & Zhaosong Lu
Cluster: Nonlinear Optimization
Talk 1: Adaptive Optimization with Highly Corrupted Inputs: A Unified Framework for High-Probability Iteration Complexity Analysis
Speaker: Miaolan Xie
Abstract: We consider an unconstrained continuous optimization problem in which gradient estimates may be arbitrarily corrupted in each iteration with probability greater than $\frac{1}{2}$. Additionally, function value estimates may be noisy or adversarially corrupted throughout the algorithm's execution. This framework is applicable to many real-world problems and is particularly relevant to stochastic and derivative-free optimization settings. We introduce an algorithmic and analytical framework that provides high-probability iteration complexity bounds for this highly corrupted setting. The analysis offers a unified approach, accommodating noisy or corrupted inputs and encompassing methods such as line search and trust region.
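To make the setting concrete, here is a minimal sketch of an adaptive step-search loop of the kind the abstract describes: steps are accepted only when a noisy function-value test certifies sufficient decrease, so corrupted gradients tend to be rejected. The objective, corruption model, and all parameters below are illustrative assumptions, not the speaker's algorithm.

```python
# Minimal sketch (not the speaker's algorithm): adaptive step search with
# gradient estimates corrupted with probability > 1/2 and noisy values.
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                  # simple smooth test objective
    return 0.5 * np.dot(x, x)

def grad_oracle(x, p_corrupt=0.6):
    if rng.random() < p_corrupt:           # corrupted more often than not
        return 100.0 * rng.normal(size=x.shape)   # arbitrary garbage
    return x                               # true gradient of f

def value_oracle(x, eps=1e-3):
    return f(x) + rng.uniform(-eps, eps)   # noisy function value

def adaptive_step_search(x, alpha=1.0, theta=0.1, n_iter=200):
    for _ in range(n_iter):
        g = grad_oracle(x)
        trial = x - alpha * g
        # Accept only on estimated sufficient decrease; a corrupted
        # gradient usually fails this test, shrinking the step size.
        if value_oracle(trial) <= value_oracle(x) - theta * alpha * np.dot(g, g):
            x, alpha = trial, min(2.0 * alpha, 1.0)   # success: expand
        else:
            alpha *= 0.5                              # failure: contract
    return x

x = adaptive_step_search(rng.normal(size=5))
print(np.linalg.norm(x))   # near 0 despite heavy gradient corruption
```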
Talk 2: Adaptive Stochastic Algorithms for Nonconvex Constrained Optimization
Speaker: Baoyu Zhou
Abstract: In this talk, we will discuss some recent works on the design, analysis, and implementation of a class of efficient algorithms for solving stochastic optimization problems with deterministic nonlinear nonconvex constraints. These optimization problems arise in a plethora of science and engineering applications, including physics-informed learning, PDE-constrained optimization, machine learning fairness, and optimal power flow. We are especially interested in the case where the problem's feasible region is difficult to detect and projection-type methods are intractable. The theoretical results and numerical performance demonstrate the efficiency and efficacy of our proposed algorithms.
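One standard template for this projection-free setting is a stochastic sequential quadratic programming (SQP) step, sketched below for equality constraints. This is an illustration of the problem class, not the speaker's method; the identity Hessian approximation, the toy instance, and all names are assumptions.

```python
# Minimal sketch (illustrative, not the speaker's method): one step of a
# stochastic SQP scheme for min E[f(x, xi)] s.t. c(x) = 0, where the
# constraints are deterministic and projection onto {c = 0} is intractable.
import numpy as np

def sqp_step(x, stoch_grad, c, jac, alpha=0.1):
    """One stochastic SQP step with an identity Hessian approximation."""
    g = stoch_grad(x)          # noisy estimate of the objective gradient
    J = jac(x)                 # exact Jacobian of the deterministic constraints
    r = c(x)                   # current constraint violation
    m, n = J.shape
    # KKT system of the quadratic subproblem:
    #   [ I   J^T ] [ d ]   [ -g ]
    #   [ J    0  ] [ y ] = [ -r ]
    K = np.block([[np.eye(n), J.T], [J, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -r]))
    return x + alpha * sol[:n]

# Toy instance: minimize E[0.5 * ||x - xi||^2] subject to x0 + x1 = 1.
rng = np.random.default_rng(1)
stoch_grad = lambda x: x - rng.normal(scale=0.1, size=x.shape)
c = lambda x: np.array([x[0] + x[1] - 1.0])
jac = lambda x: np.array([[1.0, 1.0, 0.0]])

x = np.zeros(3)
for _ in range(500):
    x = sqp_step(x, stoch_grad, c, jac)
print(x, c(x))   # iterates approach feasibility and stationarity
```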
Talk 3: Variance-Reduced First-Order Methods for Constrained Stochastic Optimization
Speaker: Zhaosong Lu
Abstract: We study a class of deterministically constrained stochastic optimization problems. Existing methods typically aim to find an approximate stochastic stationary point, at which the expected violations of both feasibility and first-order stationarity are small. However, such approximate solutions can lead to significant constraint violations. To address this issue, we propose single-loop variance-reduced stochastic first-order methods. In our approach, the stochastic gradient of the stochastic component is computed using either a truncated recursive scheme or a truncated Polyak momentum scheme for variance reduction, while the gradient of the deterministic component is computed exactly. Under suitable assumptions, our proposed methods not only achieve new sample and first-order operation complexity bounds but also produce stronger approximate stochastic stationary points that more reliably satisfy the constraints compared to existing methods.
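The sketch below illustrates the flavor of the truncated Polyak momentum estimator the abstract mentions: the stochastic component's gradient estimate is a momentum average truncated to a ball, while the deterministic component's gradient is exact. The recursion, components, and constants are illustrative assumptions rather than the authors' exact scheme.

```python
# Minimal sketch (an illustration, not the authors' exact method): a
# truncated Polyak-momentum gradient estimator for the stochastic
# component, with the deterministic component's gradient computed exactly.
import numpy as np

rng = np.random.default_rng(2)

def truncate(v, radius):
    """Project v onto the Euclidean ball of the given radius."""
    nv = np.linalg.norm(v)
    return v if nv <= radius else (radius / nv) * v

def stoch_grad(x):             # noisy gradient of the stochastic component
    return x - rng.normal(scale=0.5, size=x.shape)

def det_grad(x):               # exact gradient of the deterministic component
    return 0.1 * x

x = rng.normal(size=4)
G = stoch_grad(x)              # momentum estimator state
beta, alpha, tau = 0.1, 0.05, 10.0
for _ in range(1000):
    # Polyak momentum recursion, then truncation for variance reduction:
    G = truncate((1.0 - beta) * G + beta * stoch_grad(x), tau)
    x = x - alpha * (G + det_grad(x))
print(np.linalg.norm(x))       # drifts toward the stationary point x = 0
```

A truncated recursive (STORM-style) variant would instead update G with a correction term evaluated at the previous iterate before truncating; the single-loop structure is the same.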