Session: Convergence Analysis and Rates
Chair: Tianyu Wang

Talk 1: Breaking a logarithmic barrier in the stopping-time convergence rate of stochastic first-order methods
Speaker: Tianyu Wang
Abstract: We study the convergence rate of stochastic optimization measured at stopping times. Such results are important because, in most real-world settings, the time at which a stochastic algorithm is halted is a stopping time rather than a fixed horizon. We improve the state-of-the-art stopping-time convergence rate by removing a logarithmic factor. Our analysis leverages a novel framework that combines a refined Lyapunov analysis with a new Gronwall-type argument.
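
For context, a Gronwall-type argument controls a quantity that bounds itself through an integral (or summation) inequality. As a standard illustration only, since the abstract does not state the precise variant used in the talk, the classical Gronwall–Bellman inequality reads: for continuous $u \ge 0$, a constant $a \ge 0$, and an integrable $\beta \ge 0$,

$$u(t) \;\le\; a + \int_0^t \beta(s)\,u(s)\,ds \ \ \text{for all } t \ge 0 \quad\Longrightarrow\quad u(t) \;\le\; a\,\exp\!\left(\int_0^t \beta(s)\,ds\right).$$

The talk's new argument adapts this style of self-referential bound to the stopping-time setting.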

Talk 2: Uniformly Optimal and Parameter-free First-order Methods for Convex and Function-constrained Optimization
Speaker: Zhenwei Lin
Abstract: This paper presents new first-order methods that achieve optimal oracle complexities for convex optimization with convex function constraints, where oracle complexity is measured by the number of function and gradient evaluations. To this end, we develop first-order methods that can utilize computational oracles for solving diagonal quadratic programs in subproblems. For problems where the optimal value $f^*$ is known, such as those arising in overparameterized models and feasibility problems, we propose an accelerated first-order method that incorporates a modified Polyak step size and Nesterov's momentum. Notably, our method requires no knowledge of the smoothness level or the Hölder continuity parameter of the gradient and no additional line search, yet achieves the optimal oracle complexity bound of $\mathcal{O}(\varepsilon^{-2/(1+3\rho)})$ under Hölder smoothness conditions. When $f^*$ is unknown, we reformulate the problem as finding the root of the optimal value function and develop an inexact fixed-point iteration and a secant method to compute $f^*$. These root-finding subproblems are solved inexactly by first-order methods to a specified relative accuracy. We employ the accelerated prox-level (APL) method, which is proven to be uniformly optimal for convex optimization with simple constraints. Our analysis demonstrates that APL-based root-finding also achieves the optimal oracle complexity of $\mathcal{O}(\varepsilon^{-2/(1+3\rho)})$ for convex function-constrained optimization, without requiring knowledge of any problem-specific structure. Experiments on various tasks demonstrate the advantages of our methods over existing approaches to function-constrained optimization.
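
To make the two named ingredients concrete, here is a minimal, hypothetical Python sketch that combines the classical Polyak step size (which uses the known optimal value $f^*$) with Nesterov-style extrapolation on an unconstrained toy problem. It is not the speakers' algorithm: their step size is modified and their setting includes function constraints; all names below are illustrative.

```python
import numpy as np

def polyak_nesterov(f, grad, f_star, x0, iters=500):
    """Hypothetical sketch: Nesterov-style extrapolation plus the classical
    Polyak step size t_k = (f(y_k) - f*) / ||g_k||^2, which requires the
    known optimal value f*. Not the speakers' modified method."""
    x = np.asarray(x0, dtype=float)
    x_prev = x.copy()
    for k in range(1, iters + 1):
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)  # momentum extrapolation
        g = grad(y)
        gap = f(y) - f_star
        gnorm2 = float(g @ g)
        if gap <= 0.0 or gnorm2 == 0.0:
            return y  # already numerically optimal
        x_prev, x = x, y - (gap / gnorm2) * g  # Polyak step taken from y
    return x

# Toy usage: consistent least squares, so the optimal value f* = 0 is known.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = A @ rng.standard_normal(5)
f = lambda x: float(np.sum((A @ x - b) ** 2))
grad = lambda x: 2.0 * A.T @ (A @ x - b)
print(f(polyak_nesterov(f, grad, 0.0, np.zeros(5))))  # small residual expected
```

The Polyak step $t_k = (f(y_k) - f^*)/\|g_k\|^2$ is the ingredient that sidesteps knowledge of smoothness or Hölder parameters, which is the parameter-free behavior the abstract emphasizes.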

Talk 3: Stochastic Halpern iteration in normed spaces and applications to reinforcement learning
Speaker: Mario Bravo
Abstract: We analyze the oracle complexity of the stochastic Halpern iteration with minibatching, which approximates fixed points of nonexpansive operators in a finite-dimensional normed space. We show that if the underlying stochastic oracle has uniformly bounded variance, the method exhibits an overall oracle complexity of $\tilde{O}(\varepsilon^{-5})$ to reduce the expected fixed-point residual below $\varepsilon$. We also establish a lower bound of $\Omega(\varepsilon^{-3})$ that applies to a broad class of algorithms, including all averaged iterations, even with minibatching. As an application, we introduce a new model-free algorithm for weakly communicating average-reward MDPs that requires no prior knowledge, and we analyze its sample complexity. This is joint work with Juan Pablo Contreras (UDP, Chile).
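
As a rough illustration of the iteration under analysis, here is a minimal Python sketch of a Halpern iteration driven by a minibatched stochastic oracle. The anchoring weight $\lambda_k = 1/(k+2)$ is the classical Halpern choice, and the growing batch schedule is an assumption made for illustration; neither is claimed to match the speakers' scheme or its $\tilde{O}(\varepsilon^{-5})$ guarantee.

```python
import numpy as np

def stochastic_halpern(T_sample, x0, iters=200, batch=lambda k: k + 1):
    """Hypothetical sketch of a stochastic Halpern iteration with minibatching:
        x_{k+1} = lam_k * x0 + (1 - lam_k) * T_hat(x_k),
    where lam_k = 1/(k+2) is the classical anchoring weight and T_hat averages
    batch(k) noisy evaluations of the nonexpansive operator T. The batch
    schedule is illustrative, not the speakers' choice."""
    x = np.asarray(x0, dtype=float)
    anchor = x.copy()
    for k in range(iters):
        # Minibatch estimate of T(x): averaging shrinks the oracle variance.
        T_hat = np.mean([T_sample(x) for _ in range(batch(k))], axis=0)
        lam = 1.0 / (k + 2)
        x = lam * anchor + (1.0 - lam) * T_hat
    return x

# Toy usage: T(x) = 0.5 x + 1 is nonexpansive with fixed point x* = 2,
# observed through additive zero-mean noise.
rng = np.random.default_rng(1)
T_sample = lambda x: 0.5 * x + 1.0 + rng.normal(scale=0.5, size=x.shape)
print(stochastic_halpern(T_sample, np.zeros(3)))  # approaches [2., 2., 2.]
```

Averaging a growing number of oracle calls per iteration shrinks the variance of the operator estimate, which is the mechanism that lets anchored (Halpern-type) iterations tolerate stochastic noise.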

Speakers

Tianyu Wang

Zhenwei Lin

PhD candidate, Shanghai University of Finance and Economics and Purdue University

Mario Bravo

Thursday July 24, 2025 4:15pm - 5:30pm PDT
Joseph Medicine Crow Center for International and Public Affairs (DMC), Room 256, 3518 Trousdale Pkwy, Los Angeles, CA 90089
