Session: Fixed Point and Min-Max Theory in Data Science
Chair: Ahmet Alacaoglu
Cluster: Fixed Points and Variational Inequalities
Talk 1: Fixed-point error bounds for mean-payoff Markov decision processes
Speaker: Roberto Cominetti
Abstract: We discuss the use of optimal transport techniques to derive finite-time error bounds for reinforcement learning in mean-payoff Markov decision processes. The results are obtained as a special case of stochastic Krasnoselskii-Mann fixed-point iterations for nonexpansive maps. We present sufficient conditions on the stochastic noise and stepsizes that guarantee almost sure convergence of the iterates towards a fixed point, as well as non-asymptotic error bounds and convergence rates. Our main results concern the case of a martingale difference noise whose variances may grow unboundedly. We also analyze the case of uniformly bounded variances, and show how the results apply to Stochastic Gradient Descent in convex optimization.
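For readers who want to see the shape of the iteration in question, the following is a minimal numerical sketch of a stochastic Krasnoselskii-Mann scheme: averaged updates x_{n+1} = (1 - a_n) x_n + a_n (T(x_n) + U_n) for a nonexpansive map T with zero-mean additive noise U_n. The map, stepsize schedule, and noise level below are illustrative assumptions, not the setting of the talk.

```python
import numpy as np

def stochastic_km(T, x0, steps=20000, noise_std=0.1, seed=0):
    """Stochastic Krasnoselskii-Mann iteration (illustrative sketch):
        x_{n+1} = (1 - a_n) x_n + a_n * (T(x_n) + U_n)
    with T nonexpansive and U_n zero-mean (martingale-difference) noise."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for n in range(steps):
        a_n = 1.0 / (n + 2)                          # assumed stepsize schedule
        noise = rng.normal(0.0, noise_std, x.shape)  # bounded-variance noise
        x = (1.0 - a_n) * x + a_n * (T(x) + noise)
    return x

# Toy usage: a planar rotation is nonexpansive with unique fixed point 0,
# so the averaged iterates should drift towards the origin.
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(stochastic_km(lambda x: R @ x, x0=[5.0, -3.0]))
```

The vanishing stepsizes a_n both average out the noise and supply the relaxation that makes the Krasnoselskii-Mann scheme converge for maps that are merely nonexpansive rather than contractive.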
Talk 2: Fast Last-Iterate Convergence of Learning in Games Requires Forgetful Algorithms
Speaker: Haipeng Luo
Abstract: Self-play via online learning is one of the premier ways to solve large-scale two-player zero-sum games, both in theory and practice. Particularly popular algorithms include optimistic multiplicative weights update (OMWU) and optimistic gradient descent-ascent (OGDA). While both algorithms enjoy O(1/T) ergodic convergence to Nash equilibrium in two-player zero-sum games, OMWU offers several advantages, including logarithmic dependence on the size of the payoff matrix and fast convergence to coarse correlated equilibria even in general-sum games. However, in terms of last-iterate convergence in two-player zero-sum games, an increasingly popular topic in this area, OGDA guarantees that the duality gap shrinks at a rate of O(1/sqrt(T)), while the best existing last-iterate convergence rate for OMWU depends on a game-dependent constant that can be arbitrarily large. This raises the question: is this potentially slow last-iterate convergence an inherent disadvantage of OMWU, or is the current analysis too loose? Somewhat surprisingly, we show that the former is true. More generally, we prove that a broad class of algorithms that do not forget the past quickly all suffer from the same issue: for any arbitrarily small delta > 0, there exists a 2x2 matrix game such that the algorithm admits a constant duality gap even after 1/delta rounds. This class includes OMWU and other standard optimistic follow-the-regularized-leader algorithms.
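As a concrete reference point for the dynamics being compared, here is a minimal self-play sketch of OMWU on a matrix game, reporting the duality gap of the last iterate. The stepsize and the 2x2 game below are illustrative assumptions; the lower-bound construction in the talk is a different, carefully chosen game.

```python
import numpy as np

def omwu(A, eta=0.1, T=5000):
    """OMWU self-play on the matrix game min_x max_y x^T A y over the
    simplices (illustrative sketch; eta and A are assumed for demonstration)."""
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n
    g_x_prev = A @ y       # previous loss vector for the min player
    g_y_prev = A.T @ x     # previous payoff vector for the max player
    for _ in range(T):
        g_x, g_y = A @ y, A.T @ x
        # Optimistic step: use 2 * current - previous vector as the prediction.
        x = x * np.exp(-eta * (2 * g_x - g_x_prev))
        x /= x.sum()
        y = y * np.exp(eta * (2 * g_y - g_y_prev))
        y /= y.sum()
        g_x_prev, g_y_prev = g_x, g_y
    gap = (A.T @ x).max() - (A @ y).min()  # duality gap of the last iterate
    return x, y, gap

# Toy usage: matching pennies, whose unique equilibrium is (1/2, 1/2) for both.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(omwu(A))
```

The multiplicative updates are exactly what makes OMWU non-forgetful: the current iterate is a product of all past gradient terms, which is the property the talk's lower bound exploits.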
Talk 3: A first-order algorithm for decentralised min-max problems
Speaker: Matthew Tam
Abstract: In this work, we consider a connected network of finitely many agents working cooperatively to solve a min-max problem with convex-concave structure. We propose a decentralised first-order algorithm which can be viewed as combining features of two algorithms: PG-EXTRA for decentralised minimisation problems and the forward reflected backward method for (non-distributed) min-max problems. In each iteration of our algorithm, each agent computes the gradient of the smooth component of its local objective function as well as the proximal operator of its nonsmooth component, followed by a round of communication with its neighbours.
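To make the iteration pattern concrete, the following is a schematic sketch of one way to combine a forward-reflected step on local operators, a local proximal step, and gossip-style neighbour averaging. The mixing matrix, stepsize, and placement of the communication round are assumptions for illustration; the talk's algorithm includes PG-EXTRA-style corrections that this sketch omits.

```python
import numpy as np

def decentralised_forb(F_list, prox_list, W, z0, gamma=0.05, iters=5000):
    """Schematic decentralised forward-reflected-backward iteration.

    Each agent i applies a forward-reflected step with its local smooth
    operator F_i, a proximal step for its local nonsmooth term, and averages
    with its neighbours through the mixing matrix W. A simplified sketch in
    the spirit of the abstract, not the paper's exact scheme."""
    N = len(F_list)
    z = np.array(z0, dtype=float)          # shape (N, d): one point per agent
    F_prev = np.stack([F_list[i](z[i]) for i in range(N)])
    for _ in range(iters):
        F_cur = np.stack([F_list[i](z[i]) for i in range(N)])
        mixed = W @ z                      # one round of neighbour communication
        z = np.stack([
            prox_list[i](mixed[i] - gamma * (2 * F_cur[i] - F_prev[i]), gamma)
            for i in range(N)
        ])
        F_prev = F_cur
    return z

# Toy usage: two agents jointly solving min_x max_y x*y, whose saddle
# operator is F(x, y) = (y, -x); no nonsmooth term, so prox is the identity.
F = lambda z: np.array([z[1], -z[0]])
prox_id = lambda v, g: v
W = np.full((2, 2), 0.5)                   # assumed doubly stochastic mixing
print(decentralised_forb([F, F], [prox_id, prox_id], W,
                         [[1.0, 2.0], [3.0, -1.0]]))
```

The reflected term 2 F(z_k) - F(z_{k-1}) is what allows a single operator evaluation per iteration while retaining convergence for monotone problems, which is the feature the forward reflected backward method contributes here.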