Tuesday July 22, 2025 10:30am - 11:45am PDT
Session: Random subspace algorithms in optimization
Chair: Pierre-Louis Poirion
Cluster: Optimization For Data Science

Talk 1: Zeroth-order Random Subspace Algorithm for Non-smooth Convex Optimization
Speaker: Akiko Takeda
Abstract: Zeroth-order optimization, which uses no derivative information, is a significant research area in mathematical optimization and machine learning. Although various zeroth-order algorithms have been studied, a key theoretical limitation is that their oracle complexity depends on the dimension of the optimization problem, i.e., on the number of variables. In this talk, we propose a zeroth-order random subspace algorithm that combines a gradient-free algorithm (specifically, Gaussian randomized smoothing with central differences) with random projection to reduce the dependence of the oracle complexity on the dimension. Under some local assumptions, the algorithm attains a local convergence rate independent of the original dimension.
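
For intuition, here is a minimal sketch of one iteration of such a scheme (illustrative only: the function name zo_subspace_step, the step size, the smoothing radius, and the subspace dimension are hypothetical choices, not the parameters analyzed in the talk):

    import numpy as np

    def zo_subspace_step(f, x, s=20, mu=1e-4, lr=1e-2, rng=None):
        """One zeroth-order step restricted to a random s-dimensional subspace.

        Only function values of f are used; the derivative of the Gaussian-
        smoothed objective along a random subspace direction is estimated
        with a central difference.
        """
        rng = np.random.default_rng() if rng is None else rng
        d = x.size
        P = rng.standard_normal((d, s)) / np.sqrt(s)  # random projection, R^s -> R^d
        u = rng.standard_normal(s)                    # Gaussian direction in the subspace
        v = P @ u                                     # lifted direction in R^d
        # Central-difference estimate of the directional derivative along v
        dd = (f(x + mu * v) - f(x - mu * v)) / (2.0 * mu)
        return x - lr * dd * v

    # Example: a non-smooth convex objective, f(x) = ||x||_1
    rng = np.random.default_rng(0)
    x = np.ones(1000)
    for _ in range(200):
        x = zo_subspace_step(lambda z: np.abs(z).sum(), x, rng=rng)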

Talk 2: Random matrices to solve large linear systems
Speaker: Jiaming Yang
Abstract: Large matrices arise in real-world applications in machine learning, data analysis, and optimization: from the representation of massive datasets with high-dimensional features to the first- and second-order derivatives of an objective function to be optimized. How can we capture the intrinsic patterns and physics efficiently and accurately? As an interdisciplinary research area rooted in theoretical computer science (TCS) and random matrix theory (RMT), and now thriving in scientific computing and machine learning, Randomized Numerical Linear Algebra (RNLA) offers a promising avenue for alleviating these computational challenges: it introduces randomness into large-scale problems while guaranteeing efficient approximate solutions that retain high accuracy. In particular, randomized subspace methods are well established in theory but have only recently begun to flourish in optimization. In my recent work [Derezinski, Yang, STOC 2024] and [Derezinski, Musco, Yang, SODA 2025], we focus on solving linear systems and develop several types of stochastic optimization algorithms. Our algorithms provably achieve better time complexity, which is linear in the input matrix size when the matrix has a flat spectral tail. As one of our key techniques, we construct a low-rank Nyström approximation with sparse random sketching, yielding an easy-to-construct preconditioner with effective guarantees from randomized subspace theory.
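
For intuition, a minimal sketch of the Nyström-preconditioned CG idea follows (a dense Gaussian test matrix stands in for the sparse sketching used in the papers, and the function name, shift, and regularization choices are hypothetical):

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def nystrom_pcg(A, b, k=50, reg=1e-8, rng=None):
        """Solve A x = b (A symmetric PSD) by CG, preconditioned with a rank-k
        Nystrom approximation A ~ Y (Om^T Y)^{-1} Y^T, where Y = A Om."""
        rng = np.random.default_rng() if rng is None else rng
        n = A.shape[0]
        Om = rng.standard_normal((n, k))     # dense Gaussian test matrix (stand-in)
        Y = A @ Om
        nu = np.sqrt(n) * np.finfo(float).eps * np.linalg.norm(Y)  # stabilizing shift
        Y = Y + nu * Om
        G = Om.T @ Y
        C = np.linalg.cholesky((G + G.T) / 2.0)
        B = np.linalg.solve(C, Y.T)          # B^T B = Y G^{-1} Y^T, the Nystrom approx.
        _, S, Vh = np.linalg.svd(B, full_matrices=False)
        U = Vh.T                             # approximate top-k eigenvectors of A
        lam = np.maximum(S**2 - nu, 0.0)     # approximate top-k eigenvalues of A
        sigma = lam[-1] + reg
        def apply_Pinv(v):                   # damp the top-k eigenspace, keep the rest
            w = U.T @ v
            return U @ (w * (sigma / (lam + sigma))) + (v - U @ w)
        M = LinearOperator((n, n), matvec=apply_Pinv)
        x, info = cg(A, b, M=M)
        return x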

Talk 3: Random Subspace Homogenized Trust Region Method
Speaker: Pierre-Louis Poirion
Abstract: We propose the Random Subspace Homogenized Trust Region (RSHTR) method, which has the best theoretical guarantees among random subspace algorithms for nonconvex optimization. Furthermore, under rank-deficient conditions, RSHTR converges quadratically to a second-order stationary point. Experiments on real-world datasets verify the benefits of RSHTR.
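
For context, a generic random subspace trust-region step looks roughly as follows (a plain eigenvalue-shift safeguard stands in for RSHTR's homogenization; all names and parameters are illustrative, not the method from the talk):

    import numpy as np

    def random_subspace_tr_step(grad, hessvec, x, s=10, radius=1.0, rng=None):
        """One trust-region-style step inside a random s-dimensional subspace.

        grad(x) returns the gradient; hessvec(x, v) returns the Hessian-vector
        product H(x) v, so each step needs only s such products.
        """
        rng = np.random.default_rng() if rng is None else rng
        d = x.size
        P = rng.standard_normal((d, s)) / np.sqrt(d)   # random subspace basis
        g = P.T @ grad(x)                              # projected gradient
        H = P.T @ np.column_stack([hessvec(x, P[:, j]) for j in range(s)])
        w, V = np.linalg.eigh((H + H.T) / 2.0)         # small s x s eigenproblem
        shift = max(0.0, -w.min()) + 1e-8              # convexify the local model
        u = -V @ ((V.T @ g) / (w + shift))             # shifted Newton step
        norm_u = np.linalg.norm(u)
        if norm_u > radius:                            # enforce the trust region
            u *= radius / norm_u
        return x + P @ u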

Speakers

Akiko Takeda

University of Tokyo/RIKEN
Akiko Takeda received the Doctor of Science degree in information science from the Tokyo Institute of Technology, Japan, in 2001. She is currently a professor in the Department of Mathematical Informatics, The University of Tokyo, and the team leader of the Continuous Optimization Team...

Jiaming Yang

PhD Student, University of Michigan

Pierre-Louis Poirion

RIKEN Center for Advanced Intelligence Project
Taper Hall (THH) 212, 3501 Trousdale Pkwy, Los Angeles, CA 90089
