Session: Methods for Large-Scale Nonlinear Optimization III
Chair: Baoyu Zhou
Cluster: Nonlinear Optimization
Talk 1: A Two Stepsize SQP Method for Nonlinear Equality Constrained Stochastic Optimization
Speaker: Michael O'Neill
Abstract: We develop a Sequential Quadratic Optimization (SQP) algorithm for minimizing a stochastic objective function subject to deterministic equality constraints. The method utilizes two different stepsizes: one that exclusively scales the component of the step corrupted by the variance of the stochastic gradient estimates, and a second that scales the entire step. We prove that this stepsize-splitting scheme has a worst-case complexity result that improves on the best known result for this class of problems. In terms of approximately satisfying the constraints, this complexity result matches that of deterministic SQP methods, up to constant factors, while matching the known optimal rate for stochastic SQP methods for approximately minimizing the norm of the gradient of the Lagrangian. We also propose and analyze multiple variants of our algorithm. One of these variants is based on popular adaptive gradient methods for unconstrained stochastic optimization, while another incorporates a safeguarded line search along the constraint violation. Preliminary numerical experiments show competitive performance against a state-of-the-art stochastic SQP method. In addition, in these experiments, we observe an improved rate of convergence in terms of the constraint violation, as predicted by the theoretical results.
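A minimal toy sketch of the step-splitting idea the abstract describes, not the speakers' actual algorithm: for a linearly constrained quadratic with a noisy gradient, the step is decomposed into a deterministic normal component (which reduces constraint violation) and a noise-corrupted tangential component, with a hypothetical stepsize `beta` scaling only the noisy part and `alpha` scaling the whole step. All problem data, stepsizes, and variable names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy instance: min E[0.5*||x - c||^2]  s.t.  A x = b
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x = np.zeros(3)
noise = 0.1

alpha, beta = 0.5, 0.2  # hypothetical: alpha scales the full step, beta the noisy part

# Projector onto the null space of A (tangential directions leave A x unchanged)
P = np.eye(3) - A.T @ np.linalg.inv(A @ A.T) @ A

for k in range(200):
    g = (x - c) + noise * rng.standard_normal(3)    # stochastic gradient estimate
    v = -A.T @ np.linalg.solve(A @ A.T, A @ x - b)  # normal step: reduces ||Ax - b||
    u = -P @ g                                      # tangential step: corrupted by noise
    x = x + alpha * (v + beta * u)                  # two-stepsize update

feasibility = np.linalg.norm(A @ x - b)
```

Because only the tangential component carries gradient noise, the constraint violation contracts deterministically here, illustrating (in a very loose sense) why feasibility can improve at a faster, deterministic-like rate.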
Talk 2: A Proximal-Stochastic-Gradient Method for Regularized Equality Constrained Problems
Speaker: Daniel P. Robinson
Abstract: I present an algorithm of the proximal-stochastic-gradient variety for minimizing the sum of a nonconvex loss function and a convex regularization function subject to nonlinear equality constraints. Motivation for the algorithm is provided, along with a theoretical analysis and preliminary numerical results.
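The abstract does not spell out the update, so the following shows only the generic proximal-stochastic-gradient building block it names, for an assumed smooth nonconvex-style loss plus an assumed l1 regularizer (soft-thresholding prox), and without the nonlinear equality-constraint machinery that is the talk's actual contribution. All data and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def prox_l1(z, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Assumed toy instance: min E[0.5*||x - c||^2] + lam*||x||_1
# (the talk's method additionally handles nonlinear equality constraints)
c = np.array([2.0, -0.05, 0.5])
lam, step, noise = 0.1, 0.1, 0.05
x = np.zeros(3)

for k in range(500):
    g = (x - c) + noise * rng.standard_normal(3)  # stochastic gradient of the loss
    x = prox_l1(x - step * g, step * lam)         # proximal-stochastic-gradient step
```

The prox step keeps the update cheap even though the regularizer is nonsmooth: the gradient step handles the loss, and the closed-form prox handles the l1 term, shrinking small coordinates exactly to zero.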
Talk 3: Randomized Feasibility-Update Algorithms For Variational Inequality Problems
Speaker: Abhishek Chakraborty
Abstract: This paper considers a variational inequality (VI) problem arising from a game among multiple agents, where each agent aims to minimize its own cost function subject to its constraint set, represented as the intersection of a (possibly infinite) number of convex functional level sets. A direct projection-based approach or Lagrangian-based techniques for such a problem can be computationally expensive, if not impossible, to implement. To deal with the problem, we consider randomized methods that avoid the projection step onto the whole constraint set by employing random feasibility updates. In particular, we propose and analyze such random methods for solving VIs based on the projection method, the Korpelevich method, and the Popov method. We establish the almost sure convergence of the methods and also provide their convergence rate guarantees. We illustrate the performance of the methods in simulations for two-agent games.
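A toy sketch of the random-feasibility-update idea for the projection-method variant only (the abstract's Korpelevich and Popov variants replace the forward step, not the feasibility update). Instead of projecting onto the full intersection of constraint sets each iteration, the iterate is projected onto a single randomly selected halfspace, which is cheap and exact. The VI operator, constraint data, and stepsize below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy VI: F(x) = x - c with c feasible, so the VI solution is x* = c.
c = np.array([0.5, 0.5])
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # constraints a_i^T x <= b_i
b = np.array([1.0, 1.0, 1.5])

def proj_halfspace(x, a, bi):
    """Exact (cheap) projection onto a single halfspace {x : a^T x <= bi}."""
    viol = a @ x - bi
    return x - (max(viol, 0.0) / (a @ a)) * a

x = np.array([3.0, 0.0])  # infeasible start
gamma = 0.1
for k in range(2000):
    y = x - gamma * (x - c)            # forward (projection-method) step on F
    i = rng.integers(len(b))           # random feasibility update: project onto
    x = proj_halfspace(y, A[i], b[i])  # one randomly chosen constraint set
```

Each iteration touches only one constraint, yet feasibility and convergence are recovered in the limit; this is the computational appeal when the constraint set is an intersection of many (or infinitely many) level sets.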