Session: Optimization with Variational Inequality Constraints
Chair: Jin Zhang
Cluster: Fixed Points and Variational Inequalities
Talk 1: A Smoothing Implicit Gradient Algorithm for Optimization with Parametric Variational Inequality Constraints
Speaker: Jin Zhang
Abstract: We propose a smoothing implicit gradient algorithm for optimization with Parametric Variational Inequality (PVI) constraints on a moving polyhedron. We approximate the nonsmooth PVI constraints by smoothing constraints and establish the relationship between the smoothing problem and the original problem with respect to their global solutions and first-order stationary points. Moreover, we introduce a penalty function and auxiliary variables to solve the smoothing problem by the implicit gradient algorithm with an updating scheme for the smoothing parameter. We prove that the algorithm converges to a stationary point of the original problem. Numerical experiments are conducted to validate the efficiency of the proposed algorithm.
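For illustration only (not the algorithm presented in the talk): the sketch below applies the smoothing-plus-implicit-gradient idea to a toy problem whose PVI constraint reduces to a box projection, so the smoothed solution map and its implicit gradient have closed forms, and the smoothing parameter mu is driven to zero along the iterations. The objective f, the box bounds, and the step size are hypothetical choices.

```python
# Toy sketch (not the authors' algorithm): smoothing + implicit gradient for
# min_x f(x, y(x)), where y(x) solves a box-constrained PVI whose solution map
# reduces to a clipping operator, smoothed with a parameter mu driven to zero.
import numpy as np

def smooth_clip(t, lo, hi, mu):
    """Smooth approximation of np.clip(t, lo, hi); exact as mu -> 0."""
    smax = 0.5 * (t + lo + np.sqrt((t - lo) ** 2 + mu ** 2))        # ~ max(t, lo)
    return 0.5 * (smax + hi - np.sqrt((smax - hi) ** 2 + mu ** 2))  # ~ min(., hi)

def smooth_clip_grad(t, lo, hi, mu):
    """Derivative of smooth_clip w.r.t. t (chain rule through both branches)."""
    smax = 0.5 * (t + lo + np.sqrt((t - lo) ** 2 + mu ** 2))
    dsmax = 0.5 * (1 + (t - lo) / np.sqrt((t - lo) ** 2 + mu ** 2))
    dmin = 0.5 * (1 - (smax - hi) / np.sqrt((smax - hi) ** 2 + mu ** 2))
    return dmin * dsmax

def f(x, y):       # illustrative upper-level objective
    return 0.5 * np.sum((y - 1.0) ** 2) + 0.05 * np.sum(x ** 2)

def grad_f(x, y):  # (df/dx, df/dy)
    return 0.1 * x, (y - 1.0)

lo, hi = 0.0, 2.0
x = np.array([3.0, -1.0])
mu, step = 1.0, 0.5
for k in range(200):
    y = smooth_clip(x, lo, hi, mu)            # smoothed PVI solution y_mu(x)
    dy_dx = smooth_clip_grad(x, lo, hi, mu)   # implicit gradient of y_mu
    gx, gy = grad_f(x, y)
    x -= step * (gx + dy_dx * gy)             # total gradient step in x
    mu = max(0.9 * mu, 1e-8)                  # updating scheme for mu
print("x =", x, "y =", smooth_clip(x, lo, hi, 0.0))
```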
Talk 2: On the Resolution of Large-Scale Optimization
Speaker: Xue Xie
Abstract: In this talk, I will discuss the resolution of optimal transport (OT) and of general nonconvex optimization with expectation-valued objective functions at large scale. For the OT problem, we introduce random block coordinate descent (RBCD) methods to directly solve the underlying linear programming (LP) problem. Our approach restricts the potentially large-scale LP to small LP subproblems constructed via randomly chosen working sets. We equip the vanilla version of RBCD with almost sure convergence and a linear convergence rate. To further improve efficiency, we exploit the special structure of the OT constraints and refine the random working set selection. Preliminary numerical experiments demonstrate that the accelerated RBCD compares well with other solvers and offers the advantage of saving memory. The second problem is complicated by a nonconvex expectation-valued objective function. Such a formulation captures deep learning, policy optimization, autoencoder training, etc. It is known in the literature that the sample complexity of stochastic first-order methods depends linearly on the problem dimension, which is undesirable for solving large-scale problems. To address this issue, we propose dimension-insensitive stochastic first-order methods. Our algorithms allow for non-Euclidean and nonsmooth distance functions as the proximal terms when taking the stochastic gradient step. State-of-the-art sample complexity guarantees in terms of the dimension are established.
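A minimal sketch of the random block coordinate descent idea for the OT linear program (assuming 2x2 working sets; this is not the accelerated variant discussed in the talk): each iteration draws two rows and two columns at random and solves the resulting tiny LP exactly, which amounts to shifting mass along a 2x2 cycle that preserves both marginals. The problem size, marginals, and cost matrix are hypothetical.

```python
# Toy RBCD sketch for the OT linear program
#   min <C, P>  s.t.  P @ 1 = a,  P.T @ 1 = b,  P >= 0,
# using randomly chosen 2x2 working sets.
import numpy as np

rng = np.random.default_rng(0)
n, m = 30, 30
a = np.full(n, 1.0 / n)                     # source marginal
b = np.full(m, 1.0 / m)                     # target marginal
C = rng.random((n, m))                      # cost matrix
P = np.outer(a, b)                          # feasible starting plan

for it in range(20000):
    i1, i2 = rng.choice(n, size=2, replace=False)
    j1, j2 = rng.choice(m, size=2, replace=False)
    # Moving mass t along the 2x2 cycle keeps row/column sums unchanged:
    #   P[i1,j1]+t, P[i1,j2]-t, P[i2,j1]-t, P[i2,j2]+t
    slope = C[i1, j1] - C[i1, j2] - C[i2, j1] + C[i2, j2]
    t_max = min(P[i1, j2], P[i2, j1])       # keep P >= 0 when t > 0
    t_min = -min(P[i1, j1], P[i2, j2])      # keep P >= 0 when t < 0
    t = t_max if slope < 0 else t_min       # tiny LP is linear in t: pick a vertex
    P[i1, j1] += t; P[i2, j2] += t
    P[i1, j2] -= t; P[i2, j1] -= t

print("transport cost:", np.sum(C * P))
```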
Talk 3: Overcoming Lower-Level Constraints in Bilevel Optimization: A Novel Approach with Regularized Gap Functions
Speaker: Wei Yao
Abstract: Constrained bilevel optimization tackles nested structures present in constrained learning tasks such as constrained meta-learning, adversarial learning, and distributed bilevel optimization. However, existing bilevel optimization methods are typically restricted to specific constraint settings, such as linear lower-level constraints. In this work, we overcome this limitation and develop a new single-loop, Hessian-free constrained bilevel algorithm capable of handling more general lower-level constraints. We achieve this by employing a doubly regularized gap function tailored to the constrained lower-level problem, transforming constrained bilevel optimization into an equivalent single-level optimization problem with a single smooth constraint. We rigorously establish a non-asymptotic convergence analysis of the proposed algorithm under convexity of the lower-level problem, avoiding the strong convexity assumptions on the lower-level objective or the coupling convexity assumptions on lower-level constraints found in the existing literature. Additionally, the generality of our method allows for its extension to bilevel optimization with a minimax lower-level problem. We evaluate the effectiveness and efficiency of our algorithm on various synthetic problems, typical hyperparameter learning tasks, and generative adversarial networks.
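A hedged sketch of the gap-function idea (an illustrative proximally regularized gap, not necessarily the talk's exact doubly regularized construction or algorithm): for a convex lower-level problem min_y g(x,y) over a set C, the quantity G_rho(x,y) = g(x,y) - min_{w in C} [g(x,w) + (rho/2)||w - y||^2] is nonnegative on C and vanishes exactly at lower-level solutions, so the bilevel problem can be treated through the single smooth constraint G_rho(x,y) <= 0, here handled with a penalty in a single-loop, Hessian-free update. The quadratic lower level, box set, penalty weight, and step size are toy choices.

```python
# Illustrative sketch only: a proximally regularized gap function turns the
# constrained lower-level problem  min_y g(x,y) s.t. y in C  into a single
# smooth constraint G_rho(x,y) <= 0, penalized in a single-loop update.
# The quadratic g and box C are toy choices so the inner argmin has a closed form.
import numpy as np

rng = np.random.default_rng(1)
d = 5
A = rng.standard_normal((d, d)) / np.sqrt(d)
y_tgt = rng.random(d)
rho, lam, step = 1.0, 5.0, 0.01

def F_grads(x, y):                      # gradients of a toy upper-level objective
    return 0.2 * x, (y - y_tgt)

def gap_and_grads(x, y):
    """G_rho and its gradients via Danskin's theorem (no Hessians needed)."""
    r = y - A @ x                       # residual of g(x,y) = 0.5*||y - A x||^2
    w = np.clip((A @ x + rho * y) / (1.0 + rho), 0.0, 1.0)   # inner argmin w*
    rw = w - A @ x
    G = 0.5 * r @ r - (0.5 * rw @ rw + 0.5 * rho * (w - y) @ (w - y))
    gx = A.T @ (w - y)                  # d/dx of [g(x,y) - g(x,w*)]
    gy = r - rho * (y - w)              # d/dy g(x,y) minus d/dy of the prox term
    return G, gx, gy

x, y = np.zeros(d), np.full(d, 0.5)
for k in range(5000):
    Fx, Fy = F_grads(x, y)
    G, Gx, Gy = gap_and_grads(x, y)
    x -= step * (Fx + lam * Gx)                          # penalty gradient step in x
    y = np.clip(y - step * (Fy + lam * Gy), 0.0, 1.0)    # projected step in y
print("gap value:", gap_and_grads(x, y)[0])
```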