Monday July 21, 2025 1:15pm - 2:30pm PDT
Session: Nonsmooth stochastic optimization and variational inequalities
Chair: Wei Bian
Cluster: Fixed Points and Variational Inequalities

Talk 1: Dynamic Stochastic Approximation Jacobi-Type ADMM Method for Two-stage Stochastic Generalized Nash Equilibrium Problems
Speaker: Hailin Sun
Abstract: This paper studies a class of two-stage stochastic generalized Nash equilibrium problems (SGNEPs) in which each player engages in a two-stage sequential decision-making process in a random environment: each player first makes a decision in the current stage and competes with the others, and then makes a decision in the future stage. Two-stage SGNEPs of this type arise widely in fields such as production and manufacturing, transportation logistics, and portfolio management. From a computational perspective, the main difference between two-stage and single-stage SGNEPs is the need to handle the optimal value function of the second-stage problem, which has no explicit expression; at present, there is no effective algorithm to address this challenge. To overcome this difficulty, an accelerated primal-dual method (APDM) is proposed to obtain an approximate $\epsilon$-subgradient of the second-stage optimal value function, achieving a convergence rate of $O\left(\frac{1}{\sqrt{N}}\right)$. Using this approximate $\epsilon$-subgradient together with a variance reduction technique, a dynamic stochastic approximation Jacobi-type alternating direction method of multipliers (DSA-JADMM) is proposed and applied to solve two-stage SGNEPs. The algorithm is an inexact stochastic version of the Jacobi-type ADMM, since at each iteration it computes an approximate $\epsilon$-subgradient for the second stage randomly via the APDM. It is also shown that the algorithm converges to a weak $\epsilon$-variational equilibrium point of two-stage SGNEPs, a special type of Nash equilibrium point, with a convergence rate of $O\left(\frac{1}{\sqrt{K}}\right)$. Preliminary numerical experiments validate the effectiveness of the DSA-JADMM method and demonstrate its advantages and superior performance.
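
For readers unfamiliar with the Jacobi-type ADMM template that the talk builds on, the sketch below runs a generic loop for a few players coupled by a shared linear constraint $\sum_i A_i x_i = b$: every player updates in parallel from the previous iterates, and second-stage information enters only through a sampled subgradient. The problem data, the simple gradient-step primal update, and the noisy `second_stage_subgrad` stand-in for the APDM-produced $\epsilon$-subgradient are all hypothetical illustrations, not the DSA-JADMM method of the paper.

```python
import numpy as np

# Minimal sketch of a Jacobi-type ADMM-style iteration for n players coupled by
# a shared linear constraint sum_i A_i x_i = b.  Illustration of the generic
# template only; all data and the sampled second-stage subgradient below are
# hypothetical stand-ins, not the DSA-JADMM of the talk.

rng = np.random.default_rng(0)
n_players, dim = 3, 4
A = [rng.standard_normal((dim, dim)) for _ in range(n_players)]
b = rng.standard_normal(dim)
c = [rng.standard_normal(dim) for _ in range(n_players)]   # first-stage linear costs
x = [np.zeros(dim) for _ in range(n_players)]
lam = np.zeros(dim)                                        # shared dual variable
rho, step = 1.0, 0.1

def second_stage_subgrad(x_i):
    # Stand-in for an approximate epsilon-subgradient of the second-stage
    # optimal value function, obtained from a random sample (here: noisy linear).
    return x_i + 0.01 * rng.standard_normal(x_i.shape)

for k in range(200):
    residual = sum(A[i] @ x[i] for i in range(n_players)) - b
    # Jacobi step: every player updates in parallel from the same old iterates.
    new_x = []
    for i in range(n_players):
        grad_i = (c[i] + second_stage_subgrad(x[i])
                  + A[i].T @ (lam + rho * residual))
        new_x.append(x[i] - step * grad_i)        # simple (sub)gradient step
    x = new_x
    lam = lam + rho * (sum(A[i] @ x[i] for i in range(n_players)) - b)
```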

Talk 2: Nonsmooth convex-concave saddle point problems with cardinality penalties
Speaker: Wei Bian
Abstract: We focus on a class of convexly constrained nonsmooth convex-concave saddle point problems with cardinality penalties. Although such nonsmooth, nonconvex-nonconcave, and discontinuous min-max problems may not have a saddle point, we show that they have a local saddle point and a global minimax point, and that some local saddle points possess lower bound properties. Based on these lower bound properties, we define a class of strong local saddle points for the stability of variable selection. Moreover, we give a framework for constructing continuous relaxations of the discontinuous min-max problems based on convolution, such that they have the same saddle points as the original problems. We also establish the relations between the continuous relaxation problems and the original problems regarding local saddle points, global minimax points, local minimax points, and stationary points.
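
As a concrete illustration of what a continuous relaxation of a cardinality penalty looks like, the sketch below uses the standard capped-$\ell_1$ surrogate $\min(|t|/\nu, 1)$ as a stand-in; this is a deliberately simple swapped-in example, not the convolution-based relaxations constructed in the talk, and the parameters `lam` and `nu` are illustrative choices.

```python
import numpy as np

# Illustration only: the capped-l1 function theta_nu(t) = min(|t|/nu, 1) is a
# standard continuous surrogate for the scalar cardinality indicator 1[t != 0].
# It is shown here purely to make "continuous relaxation of a cardinality
# penalty" concrete; the talk's convolution-based relaxations are not reproduced.

def card_penalty(x, lam=1.0):
    # discontinuous penalty lam * ||x||_0
    return lam * np.count_nonzero(x)

def capped_l1_penalty(x, lam=1.0, nu=1e-2):
    # continuous surrogate lam * sum_i min(|x_i| / nu, 1); it equals lam * ||x||_0
    # whenever every nonzero entry satisfies |x_i| >= nu (a lower-bound regime)
    return lam * np.sum(np.minimum(np.abs(x) / nu, 1.0))

x = np.array([0.0, 0.5, -0.003, 2.0])
print(card_penalty(x), capped_l1_penalty(x))   # 3.0 vs about 2.3: the small entry is not fully counted
```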

Talk 3: An Augmented Lagrangian Method for Training Recurrent Neural Networks
Speaker: Chao Zhang
Abstract: Recurrent neural networks (RNNs) are widely used to model sequential data in areas such as natural language processing, speech recognition, machine translation, and time series analysis. In this paper, we model the training of RNNs with the ReLU activation function as a constrained optimization problem with a smooth nonconvex objective function and piecewise smooth nonconvex constraints. We prove that any feasible point of the optimization problem satisfies the no nonzero abnormal multiplier constraint qualification (NNAMCQ), and that any local minimizer is a Karush-Kuhn-Tucker (KKT) point of the problem. Moreover, we propose an augmented Lagrangian method (ALM) and design an efficient block coordinate descent (BCD) method to solve the ALM subproblems. The update of each block of the BCD method has a closed-form solution, the stopping criterion for the inner loop is easy to check, and the inner loop terminates in finitely many steps. We further show that the BCD method generates a directional stationary point of the subproblem, and we establish the global convergence of the ALM to a KKT point of the constrained optimization problem. Numerical results demonstrate the efficiency and effectiveness of the ALM for training RNNs compared with state-of-the-art algorithms.
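
To make the constrained reformulation concrete, the sketch below treats the hidden states as decision variables, imposes $h_t = \mathrm{ReLU}(W h_{t-1} + U x_t)$ as equality constraints, and evaluates the corresponding augmented Lagrangian. The dimensions, data, squared loss, and penalty parameter are illustrative assumptions; the closed-form BCD updates and the convergence analysis from the talk are not reproduced here.

```python
import numpy as np

# Sketch of the constrained reformulation idea: treat the hidden states h_t as
# decision variables, impose h_t = relu(W h_{t-1} + U x_t) as constraints, and
# penalize violations through an augmented Lagrangian.  Dimensions, data, the
# squared loss, and this plain function evaluation are illustrative assumptions,
# not the ALM/BCD scheme analyzed in the talk.

rng = np.random.default_rng(1)
T, d_in, d_h = 5, 3, 4
X = rng.standard_normal((T, d_in))          # input sequence
y = rng.standard_normal(d_h)                # target for the last hidden state

W = rng.standard_normal((d_h, d_h)) * 0.1
U = rng.standard_normal((d_h, d_in)) * 0.1
H = np.zeros((T, d_h))                      # hidden states, treated as variables
Lam = np.zeros((T, d_h))                    # multipliers for h_t = relu(...)
rho = 1.0

def relu(z):
    return np.maximum(z, 0.0)

def augmented_lagrangian(W, U, H, Lam, rho):
    loss = 0.5 * np.sum((H[-1] - y) ** 2)   # smooth objective on the last state
    al = loss
    h_prev = np.zeros(d_h)
    for t in range(T):
        c_t = H[t] - relu(W @ h_prev + U @ X[t])   # constraint residual
        al += Lam[t] @ c_t + 0.5 * rho * np.sum(c_t ** 2)
        h_prev = H[t]
    return al

print(augmented_lagrangian(W, U, H, Lam, rho))
```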

Speakers

Hailin Sun

Wei Bian

Chao Zhang
Taper Hall (THH) 110, 3501 Trousdale Pkwy, Los Angeles, CA 90089
