Session: Recent Advances in Stochastic Optimization: Complexity, Adaptivity, and Nonsmooth Extensions (I)
Chair: Sen Na; Zhaosong Lu
Cluster: Nonlinear Optimization
Talk 1: Adaptive Optimization with Highly Corrupted Inputs: A Unified Framework for High-Probability Iteration Complexity Analysis
Speaker: Miaolan Xie
Abstract: We consider an unconstrained continuous optimization problem in which gradient estimates may be arbitrarily corrupted in each iteration with a probability of less than $\frac{1}{2}$. Additionally, function value estimates may be noisy or adversarially corrupted throughout the algorithm’s execution. This framework applies to many real-world problems and is particularly relevant to stochastic and derivative-free optimization settings. We introduce an algorithmic and analytical framework that provides high-probability bounds on iteration complexity for this highly corrupted setting. The analysis offers a unified approach, accommodating noisy or corrupted inputs and encompassing methods such as line search and trust region.
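For illustration only, the sketch below shows the general shape of an adaptive step-search loop driven by unreliable oracles; it is not the speaker's algorithm. The names `grad_oracle` and `f_oracle` are hypothetical stand-ins for estimators that may return corrupted values on any call, and all parameter values are assumptions made for the example.

```python
import numpy as np

def corrupted_step_search(grad_oracle, f_oracle, x0, alpha0=1.0,
                          theta=0.5, gamma=2.0, c1=1e-4, max_iter=200):
    """Minimal sketch of an adaptive step-search loop with unreliable oracles.

    `grad_oracle` and `f_oracle` are hypothetical gradient / function-value
    estimators that may be corrupted on any call. The step size grows after
    an accepted step and shrinks after a rejected one, so occasional bad
    estimates only waste a bounded number of iterations.
    """
    x, alpha = np.asarray(x0, dtype=float), alpha0
    for _ in range(max_iter):
        g = grad_oracle(x)                     # possibly corrupted gradient estimate
        trial = x - alpha * g
        # Sufficient-decrease test based on (possibly noisy) function estimates.
        if f_oracle(trial) <= f_oracle(x) - c1 * alpha * np.dot(g, g):
            x, alpha = trial, gamma * alpha    # accept and grow the step size
        else:
            alpha = theta * alpha              # reject and shrink the step size
    return x

# Toy usage on a quadratic with occasionally corrupted oracles.
rng = np.random.default_rng(0)
def noisy_grad(x):
    # With small probability return an arbitrary (corrupted) direction.
    return rng.normal(size=x.shape) * 10.0 if rng.random() < 0.2 else x
noisy_f = lambda x: 0.5 * np.dot(x, x) + 1e-3 * rng.normal()
print(corrupted_step_search(noisy_grad, noisy_f, x0=np.ones(5)))
```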
Talk 2: Adaptive Stochastic Algorithms for Nonconvex Constrained Optimization
Speaker: Baoyu Zhou
Abstract: We propose a sequential quadratic programming method for solving general smooth nonlinear stochastic optimization problems subject to expectation equality constraints. We consider the setting in which the objective and constraint function values, as well as their derivatives, are not directly available. Under reasonable assumptions, the algorithm generates a sequence of iterates whose first-order stationarity measure diminishes in expectation. In addition, we establish the iteration and sample complexity for obtaining a first-order $\epsilon$-stationary iterate in expectation. Numerical experiments demonstrate the efficiency and efficacy of the proposed algorithm compared to a penalty method and an augmented Lagrangian method.
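As a rough illustration of the SQP building block, the sketch below computes one step for an equality-constrained problem by solving the KKT system of the quadratic subproblem. It is not the algorithm from the talk: it assumes an identity Hessian model and a full-rank constraint Jacobian, and the toy problem at the end is invented for the example.

```python
import numpy as np

def stochastic_sqp_step(grad_est, c_val, c_jac, hess=None):
    """One illustrative SQP step for min E[f(x)] s.t. c(x) = 0.

    Solves the KKT system of the quadratic subproblem
        min_d grad_est^T d + 0.5 d^T H d   s.t.  c_val + c_jac d = 0
    with H = I by default, returning the direction d and multiplier y.
    """
    m, n = c_jac.shape
    H = np.eye(n) if hess is None else hess
    kkt = np.block([[H, c_jac.T], [c_jac, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_est, c_val])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]

# Toy usage: min E[0.5 * ||x - xi||^2] s.t. sum(x) = 1, using sampled gradients.
rng = np.random.default_rng(1)
x = np.zeros(4)
for _ in range(50):
    g = x - rng.normal(size=4)                 # stochastic gradient estimate
    c = np.array([x.sum() - 1.0])              # constraint value
    J = np.ones((1, 4))                        # constraint Jacobian
    d, _ = stochastic_sqp_step(g, c, J)
    x += 0.2 * d                               # fixed step size for illustration
print(x)  # approaches (0.25, 0.25, 0.25, 0.25)
```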
Talk 3: Variance-Reduced First-Order Methods for Constrained Stochastic Optimization
Speaker: Zhaosong Lu
Abstract: We study a class of deterministically constrained stochastic optimization problems. Existing methods typically aim to find an approximate stochastic stationary point, at which the expected violations of both the constraints and first-order stationarity are small. However, such approximate solutions can still exhibit significant constraint violations. To address this issue, we propose single-loop variance-reduced stochastic first-order methods. In our approach, the stochastic gradient of the stochastic component is computed using either a truncated recursive scheme or a truncated Polyak momentum scheme for variance reduction, while the gradient of the deterministic component is computed exactly. Under suitable assumptions, our proposed methods not only achieve new sample and first-order operation complexity bounds but also produce stronger approximate stochastic stationary points that satisfy the constraints more reliably than those of existing methods.
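To make the variance-reduction idea concrete, the sketch below combines a truncated Polyak-momentum estimate of the stochastic component's gradient with an exact gradient of the deterministic component inside a plain projected-gradient loop. It is a minimal sketch under invented assumptions (the oracles, step sizes, and box constraint are all hypothetical), not the methods proposed in the talk.

```python
import numpy as np

def truncate(v, radius=10.0):
    """Project a vector onto a Euclidean ball (the truncation step)."""
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else v * (radius / nrm)

rng = np.random.default_rng(2)
def stoch_grad(x):   # unbiased estimate of the stochastic component's gradient (hypothetical)
    return (x - 1.0) + rng.normal(scale=0.5, size=x.shape)
def det_grad(x):     # exact gradient of the deterministic component (hypothetical)
    return 0.1 * x

x = np.zeros(3)
d = stoch_grad(x)                  # initialize the momentum estimate
for k in range(1, 500):
    beta, eta = 1.0 / np.sqrt(k), 0.05
    d = truncate((1.0 - beta) * d + beta * stoch_grad(x))   # truncated Polyak momentum
    x = np.clip(x - eta * (d + det_grad(x)), 0.0, 2.0)      # projection onto box constraints
print(x)   # approaches the minimizer of 0.5*||x-1||^2 + 0.05*||x||^2 over [0, 2]^3
```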