Session: Advances in min-max optimization algorithms for machine learning
Chair: Ahmet Alacaoglu; Yura Malitsky; Stephen J. Wright
Cluster: Optimization For Data Science

Talk 1: How to make the gradient descent-ascent converge to local minimax optima
Speaker: Donghwan Kim
Abstract: Can we effectively train a generative adversarial network (GAN) (or equivalently, optimize a minimax problem), similarly to how we successfully train a classification neural network (or equivalently, minimize a function) using gradient methods? Currently, the answer is 'No'. The remarkable success of gradient descent in minimization is supported by theoretical results; under mild conditions, gradient descent converges to a local minimizer, and almost surely avoids strict saddle points. However, comparable theoretical support for minimax optimization is currently lacking. This talk will discuss recent progress in addressing this gap using dynamical systems theory. Specifically, this talk will present new variants of gradient descent-ascent that, under mild conditions, converge to local minimax optima, where the existing gradient descent-ascent methods fail to do so.
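As a minimal illustration (not the speaker's method), the Python sketch below runs plain simultaneous gradient descent-ascent on the bilinear toy problem f(x, y) = x*y, whose unique saddle point is (0, 0); the iterates spiral away from it, which is the kind of failure the new variants discussed in the talk are designed to avoid.

```python
# Hypothetical illustration (not the speaker's method): plain simultaneous
# gradient descent-ascent on the bilinear toy problem f(x, y) = x * y,
# whose unique saddle point is (0, 0). Plain GDA spirals away from it.
import numpy as np

def gda(x0, y0, step=0.1, iters=200):
    x, y = x0, y0
    for _ in range(iters):
        gx = y                                  # d/dx of x * y
        gy = x                                  # d/dy of x * y
        x, y = x - step * gx, y + step * gy     # descent in x, ascent in y
    return np.array([x, y])

z = gda(1.0, 1.0)
print("distance from the saddle point after 200 steps:", np.linalg.norm(z))
# The distance grows with every step, so plain GDA fails even on this simple problem.
```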

Talk 2: Parameter-free second-order methods for min-max optimization
Speaker: Ali Kavis
Abstract: In this talk, I will present adaptive, line-search-free second-order methods with an optimal rate of convergence for solving convex-concave min-max problems. By means of an adaptive step size, our algorithms feature a simple update rule that requires solving only one linear system per iteration, eliminating the need for line-search or backtracking mechanisms. Specifically, we base our algorithms on the optimistic method and appropriately combine it with second-order information. Moreover, distinct from common adaptive schemes, we define the step size recursively as a function of the gradient norm and the prediction error in the optimistic update. We first analyze a variant whose step size requires knowledge of the Lipschitz constant of the Hessian. Under the additional assumption of Lipschitz continuous gradients, we further design a parameter-free version by tracking the Hessian Lipschitz constant locally and ensuring the iterates remain bounded. We also evaluate the practical performance of our algorithm by comparing it to existing second-order algorithms for minimax optimization. Inspired by the adaptive design of the step size, we propose a heuristic initialization rule that performs competitively across different problems and scenarios and eliminates the need to fine-tune the step size.
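For context, here is a hedged first-order sketch of the optimistic (past-gradient) update on the same bilinear toy problem f(x, y) = x*y; the algorithms in the talk build on this optimistic template but add second-order information and a recursively defined adaptive step size, which are not reproduced here.

```python
# First-order sketch of the optimistic ("past-gradient") method on the
# bilinear toy problem f(x, y) = x * y. The talk's algorithms extend this
# template with second-order information and an adaptive step size.
import numpy as np

def operator(z):
    # Saddle-point operator F(x, y) = (df/dx, -df/dy) for f(x, y) = x * y.
    x, y = z
    return np.array([y, -x])

def ogda(z0, step=0.1, iters=500):
    z = np.array(z0, dtype=float)
    f_prev = operator(z)
    for _ in range(iters):
        f_curr = operator(z)
        z = z - step * (2.0 * f_curr - f_prev)   # optimistic update
        f_prev = f_curr
    return z

print("iterate after 500 optimistic steps:", ogda([1.0, 1.0]))
# Unlike plain GDA, the optimistic update converges toward the saddle point
# (0, 0) on this bilinear example for a sufficiently small step size.
```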

Talk 3: On Efficient Solvers for Fixed-Point Equations: Classical Results and New Twists
Speaker: Jelena Diakonikolas
Abstract: Fixed-point operator equations—where one seeks solutions to T(x) = x for an operator T mapping a vector space to itself—form a fundamental class of problems with broad applications in optimization theory, game theory, economics, and, more recently, reinforcement learning. In this talk, I will begin by reviewing classical algorithmic results for solving such equations under oracle access to the operator T. I will then highlight key gaps in the literature, particularly in settings where T may be expansive—though in a controlled sense—or where access to T is limited to stochastic queries. Finally, I will present recent results that address some of these challenges and conclude with open questions and potential directions for future work.
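As a reminder of the classical baseline the talk starts from, the sketch below runs the Krasnoselskii-Mann (averaged) iteration x_{k+1} = (1 - theta) x_k + theta T(x_k) for a nonexpansive operator T; the rotation operator and theta = 1/2 are illustrative choices, and the expansive and stochastic settings discussed in the talk require different tools.

```python
# Sketch of the classical Krasnoselskii-Mann (averaged) fixed-point iteration
# for a nonexpansive operator T; the rotation example below is illustrative.
import numpy as np

# Nonexpansive operator: rotation by 90 degrees, whose only fixed point is 0.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
T = lambda x: R @ x

def krasnoselskii_mann(x0, theta=0.5, iters=100):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = (1 - theta) * x + theta * T(x)   # averaged fixed-point step
    return x

print(krasnoselskii_mann([1.0, 1.0]))
# The averaged iterates converge to the fixed point 0, whereas the plain
# iteration x_{k+1} = T(x_k) would rotate indefinitely without converging.
```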

Speakers

Stephen J. Wright

UW-Madison
Stephen J. Wright is the George B. Dantzig Professor of Computer Sciences, Sheldon Lubar Chair of Computer Sciences, and Hilldale Professor at the University of Wisconsin-Madison. He also serves as Chair of the Computer Sciences Department. His research is in computational optimization...

Donghwan Kim


Ali Kavis


Ahmet Alacaoglu


Yura Malitsky

Thursday July 24, 2025 10:30am - 11:45am PDT
Joseph Medicine Crow Center for International and Public Affairs (DMC) 157 3518 Trousdale Pkwy, 157, Los Angeles, CA 90089
