Session: Randomized optimization algorithms 1/2
Chair: Laurent Condat
Cluster: Nonlinear Optimization

Talk 1: Variance reduction for stochastic proximal algorithms
Speaker: Cheik Traoré
Abstract: In the context of finite-sum minimization, variance reduction techniques are widely used to improve the performance of state-of-the-art stochastic gradient methods, and both their practical impact and their theoretical properties are well established. Stochastic proximal point algorithms have been studied as an alternative to stochastic gradient algorithms because they are more stable with respect to the choice of the stepsize, but their variance-reduced versions have received much less attention than their gradient counterparts. In this work, we propose the first unified study of variance reduction techniques for stochastic proximal point algorithms. We introduce a generic stochastic proximal algorithm that can be specialized to yield proximal versions of SVRG, SAGA, and some of their variants for smooth and convex functions. We provide several convergence results for the iterates and the objective function values. In addition, under the Polyak-Łojasiewicz (PL) condition, we obtain linear convergence rates for the iterates and the function values. Our numerical experiments demonstrate the advantages of the proximal variance-reduced methods over their gradient counterparts, especially regarding stability with respect to the choice of the stepsize on difficult problems.
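The stepsize stability that motivates the proximal point approach can be seen in a small sketch (an illustration of the plain stochastic proximal point iteration, not the paper's unified variance-reduced algorithm): for least-squares terms f_i(x) = ½(aᵢᵀx − bᵢ)², the proximal operator has a closed form, and the iteration remains stable even for a large stepsize.

```python
import numpy as np

def prox_least_squares(x, a, b, gamma):
    # Closed-form prox of f_i(x) = 0.5*(a.x - b)^2:
    # prox_{gamma f_i}(x) = x - gamma*(a.x - b)/(1 + gamma*||a||^2) * a
    return x - gamma * (a @ x - b) / (1.0 + gamma * np.dot(a, a)) * a

def stochastic_prox_point(A, b, gamma=1.0, epochs=50, seed=0):
    # Plain stochastic proximal point: at each step, pick a random term
    # f_i and apply its exact proximal operator.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs * n):
        i = rng.integers(n)
        x = prox_least_squares(x, A[i], b[i], gamma)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
x_star = rng.standard_normal(10)
b = A @ x_star  # consistent system, so x_star is the minimizer
x = stochastic_prox_point(A, b, gamma=5.0)  # large stepsize: still stable
print(np.linalg.norm(x - x_star))
```

An explicit stochastic gradient step with the same gamma = 5.0 would diverge on this problem; the implicit (proximal) step shrinks the effective stepsize automatically through the 1/(1 + gamma*||a||^2) factor.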

Talk 2: Taming Nonconvex Stochastic Mirror Descent with General Bregman Divergence
Speaker: Ilyas Fatkhullin
Abstract: This paper revisits the convergence of Stochastic Mirror Descent (SMD) in the contemporary nonconvex optimization setting. Existing results for batch-free nonconvex SMD restrict the distance-generating function (DGF) to be differentiable with Lipschitz continuous gradients, thereby excluding important setups such as the Shannon entropy. In this work, we present a new convergence analysis of nonconvex SMD that supports general DGFs, overcomes the above limitations, and relies solely on standard assumptions. Moreover, our convergence is established with respect to the Bregman forward-backward envelope, which is a stronger measure than the commonly used squared norm of the gradient mapping. We further extend our results to guarantee high-probability convergence under sub-Gaussian noise and global convergence under the generalized Bregman proximal Polyak-Łojasiewicz condition. Additionally, we illustrate the advantages of our improved SMD theory in various nonconvex machine learning tasks by harnessing nonsmooth DGFs. Notably, in the context of nonconvex differentially private (DP) learning, our theory yields a simple algorithm with a (nearly) dimension-independent utility bound. For the problem of training linear neural networks, we develop provably convergent stochastic algorithms.
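Taking the Shannon entropy as DGF on the probability simplex gives the exponentiated-gradient form of SMD; this DGF does not have a Lipschitz gradient near the simplex boundary, which is precisely the case excluded by prior analyses. The following is an illustrative sketch on a convex toy objective (not the paper's experiments or its DP algorithm):

```python
import numpy as np

def smd_entropy(grad_fn, d, eta=0.1, steps=2000, seed=0):
    # Stochastic mirror descent on the probability simplex with the
    # Shannon-entropy DGF: the mirror step becomes the exponentiated-
    # gradient (multiplicative-weights) update, followed by renormalization.
    rng = np.random.default_rng(seed)
    x = np.full(d, 1.0 / d)  # start at the uniform distribution
    for _ in range(steps):
        g = grad_fn(x, rng)
        x = x * np.exp(-eta * g)  # mirror step for the entropy DGF
        x /= x.sum()              # Bregman projection onto the simplex
    return x

# Toy objective f(x) = 0.5*||x - c||^2 over the simplex, with additive
# Gaussian gradient noise; c lies on the simplex, so it is the minimizer.
c = np.array([0.7, 0.2, 0.1])
grad = lambda x, rng: (x - c) + 0.05 * rng.standard_normal(3)
x = smd_entropy(grad, d=3, eta=0.2)
print(x)
```

The iterates stay strictly positive and on the simplex by construction, so no explicit projection step or gradient-Lipschitz condition on the DGF is ever needed.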

Talk 3: Adaptive Bregman–Kaczmarz: An approach to solving linear inverse problems with independent noise exactly
Speaker: Lionel Tondji
Abstract: We consider the block Bregman–Kaczmarz method for finite-dimensional linear inverse problems. The block Bregman–Kaczmarz method uses blocks of the linear system and performs iterative steps with these blocks only. We assume a noise model that we call independent noise: each time the method performs a step for some block, it obtains a noisy sample of the respective part of the right-hand side, contaminated with new noise that is independent of all previous steps of the method. One can view this noise model as making a fresh noisy measurement of the respective block each time it is used. In this framework, we show that, with a well-chosen adaptive stepsize, the block Bregman–Kaczmarz method converges to the exact solution of the linear inverse problem. The plain form of this adaptive stepsize relies on unknown quantities (such as the Bregman distance to the solution), but we show how these quantities can be estimated purely from the given data. We illustrate our findings in numerical experiments and confirm that these heuristic estimates lead to effective stepsizes.
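A minimal sketch of the independent-noise model: with the quadratic DGF ½‖x‖², the Bregman–Kaczmarz step reduces to the classical randomized Kaczmarz projection, and a simple decaying stepsize (standing in for the paper's adaptive, data-estimated choice) averages the fresh noise out, so the iterates approach the exact solution rather than a noise floor.

```python
import numpy as np

def noisy_kaczmarz(A, b_true, sigma=0.1, steps=20000, seed=0):
    # Randomized Kaczmarz under the "independent noise" model: each time
    # a row is used, a FRESH noisy sample of its right-hand-side entry is
    # drawn. A decaying stepsize t_k ~ n/(n+k) averages the noise out.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(1, steps + 1):
        i = rng.integers(n)
        b_noisy = b_true[i] + sigma * rng.standard_normal()  # fresh noise
        t = n / (n + k)  # decaying stepsize
        x -= t * (A[i] @ x - b_noisy) / np.dot(A[i], A[i]) * A[i]
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 5))
x_star = rng.standard_normal(5)
x = noisy_kaczmarz(A, A @ x_star)
print(np.linalg.norm(x - x_star))
```

With a constant stepsize the same iteration would stall at an error of roughly the noise level sigma; the decaying stepsize pushes the final error well below it.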

Speakers

Laurent Condat

Senior Research Scientist, King Abdullah University of Science and Technology (KAUST)
Laurent Condat received a PhD in applied mathematics in 2006 from the Grenoble Institute of Technology, Grenoble, France. After a postdoc at the Helmholtz Zentrum München, Munich, Germany, he was hired in 2008 as a permanent researcher by the French National Center for Scientific Research...

Cheik Traoré


Ilyas Fatkhullin


Lionel Tondji

Tuesday July 22, 2025 4:15pm - 5:30pm PDT
Joseph Medicine Crow Center for International and Public Affairs (DMC) 158 3518 Trousdale Pkwy, 158, Los Angeles, CA 90089
