Wednesday July 23, 2025 10:30am - 11:45am PDT
Session: First-order Methods for Riemannian Optimization
Chair: David Gutman
Cluster: Optimization on Manifolds

Talk 1: Tangent Subspace Descent via Discontinuous Subspace Selections on Fixed-Rank Manifolds
Speaker: David Gutman
Abstract: The tangent subspace descent method (TSD) extends the coordinate descent algorithm to manifold domains. The key insight underlying TSD is an analogy between coordinate blocks in Euclidean space and tangent subspaces of a manifold. The core principle for ensuring convergence of TSD on smooth functions is the appropriate choice of subspace at each iteration. Previously, it was shown that such subspaces can always be chosen appropriately on the broad class of manifolds known as naturally reductive homogeneous spaces. In this talk, we provide the first instances of TSD for manifolds outside this class. The main idea underlying these new instances is the use of discontinuous subspace selections. As a result of our developments, we derive new and efficient methods for large-scale optimization on the fixed-rank and fixed-rank positive semidefinite matrix manifolds.
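
For intuition, here is a minimal numerical sketch of a TSD-style iteration on the fixed-rank matrix manifold. It alternates between two simple tangent subspaces (directions that rotate the column space vs. directions inside it) and retracts via truncated SVD. The alternating selection rule and all names below are illustrative stand-ins, not the discontinuous subspace selections developed in the talk.

```python
import numpy as np

def svd_retract(Y, r):
    """Retract an arbitrary matrix to the rank-r manifold via truncated SVD."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def tsd_step(X, egrad, r, k, step=0.05):
    """One TSD-style iteration: project the Euclidean gradient onto the
    selected tangent subspace, descend, and retract back to rank r."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T
    G = egrad(X)
    if k % 2 == 0:   # subspace A: directions that rotate the column space
        D = G @ (V @ V.T) - U @ (U.T @ G) @ (V @ V.T)
    else:            # subspace B: directions inside the current column space
        D = U @ (U.T @ G)
    return svd_retract(X - step * D, r)

# Toy usage: rank-2 approximation of A, i.e. f(X) = 0.5 * ||X - A||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
X = svd_retract(rng.standard_normal((20, 15)), 2)
for k in range(200):
    X = tsd_step(X, lambda Z: Z - A, 2, k)
print("residual:", np.linalg.norm(X - A))
```

The two subspaces above sum to the full tangent space of the fixed-rank manifold at X, so alternating between them plays the role that cycling through coordinate blocks plays in Euclidean coordinate descent.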

Talk 2: Retraction-Free Decentralized Non-convex Optimization with Orthogonal Constraints
Speaker: Shahin Shahrampour
Abstract: In this work, we investigate decentralized non-convex optimization with orthogonal constraints. Conventional algorithms for this setting require either manifold retractions or other types of projection to ensure feasibility, both of which involve costly linear algebra operations (e.g., SVD or matrix inversion). Infeasible methods, on the other hand, provide similar performance with higher computational efficiency. Inspired by this, we propose the first decentralized version of the retraction-free landing algorithm, called Decentralized Retraction-Free Gradient Tracking (DRFGT). We prove that DRFGT enjoys an ergodic convergence rate of O(1/K), matching that of centralized, retraction-based methods. We further establish that under a local Riemannian PŁ condition, DRFGT achieves a much faster linear convergence rate. Numerical experiments demonstrate that DRFGT performs on par with state-of-the-art retraction-based methods at substantially reduced computational overhead.
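
The sketch below illustrates the flavor of a retraction-free landing update combined with gossip-style gradient tracking on a toy fully connected network. The mixing matrix, step size, local costs, and exact tracking recursion are common decentralized-optimization defaults assumed here for illustration and may differ from the DRFGT analyzed in the talk.

```python
import numpy as np

def landing_field(X, G, lam=1.0):
    """Retraction-free landing direction: a skew-symmetric (relative-gradient)
    component plus a penalty pulling X toward the constraint X^T X = I."""
    skew = 0.5 * (G @ X.T - X @ G.T) @ X
    penalty = X @ (X.T @ X - np.eye(X.shape[1]))
    return skew + lam * penalty

# Toy network: n agents minimizing f_i(X) = -trace(X^T A_i X) s.t. X^T X = I.
rng = np.random.default_rng(1)
n, d, r, eta = 4, 10, 3, 1e-3
A = []
for _ in range(n):
    M = rng.standard_normal((d, d))
    A.append(M @ M.T)
W = np.full((n, n), 1.0 / n)   # doubly stochastic mixing (fully connected)

grad = lambda i, Xi: -2.0 * A[i] @ Xi
X = [np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(n)]
Y = [grad(i, X[i]) for i in range(n)]   # gradient-tracking variables

for _ in range(2000):
    # Consensus on iterates, then a landing step along the tracked gradients.
    Xn = [sum(W[i, j] * X[j] for j in range(n))
          - eta * landing_field(X[i], Y[i]) for i in range(n)]
    # Gradient tracking: mix neighbors' trackers, add local gradient change.
    Y = [sum(W[i, j] * Y[j] for j in range(n))
         + grad(i, Xn[i]) - grad(i, X[i]) for i in range(n)]
    X = Xn

print("feasibility:", max(np.linalg.norm(Xi.T @ Xi - np.eye(r)) for Xi in X))
```

Note that no retraction or projection appears in the loop: feasibility is enforced only asymptotically by the penalty term, which is where the computational savings over SVD-based retractions come from.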

Talk 3: Adaptive Low Rank Representation in Reinforcement Learning
Speaker: Chenliang Li
Abstract: In reinforcement learning (RL), there is a trade-off between asymptotic bias (the performance gap between the policy identified by RL and the actual optimal policy) and overfitting (additional suboptimality due to limited data or other sources of noise). In this paper, we study these two sources of error in RL with noisy environment dynamics. Our theoretical analysis demonstrates that while a low-rank representation of the value and policy functions may increase asymptotic bias, it reduces the risk of overfitting. Further, we propose a practical algorithm, named Adaptive Low Rank (ALR) Representation for Reinforcement Learning, which adaptively tunes the rank of the model to balance asymptotic bias against overfitting in a given environment. We validate the proposed solution through extensive experiments, including standard MuJoCo tasks. Our results show that the algorithm significantly outperforms baseline reinforcement learning solvers such as Soft Actor-Critic (SAC), particularly in noisy environments with limited observations.
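
The ALR algorithm itself operates inside a deep RL loop, but the underlying bias/overfitting trade-off can be seen in a small tabular sketch: fitting a noisy low-rank Q-table at several ranks and selecting the rank by held-out error. All data and names below are synthetic stand-ins assumed for illustration, not the method from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
S, nA, true_rank, noise = 50, 20, 3, 0.5

# Synthetic low-rank "true" Q-table; two independent noisy observations
# stand in for limited, noisy training and validation data.
Q_true = rng.standard_normal((S, true_rank)) @ rng.standard_normal((true_rank, nA))
Q_fit = Q_true + noise * rng.standard_normal((S, nA))
Q_val = Q_true + noise * rng.standard_normal((S, nA))

def rank_r_fit(Q, r):
    """Best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(Q, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# Adaptive rank choice: held-out error balances bias against overfitting.
errors = {r: np.linalg.norm(rank_r_fit(Q_fit, r) - Q_val) for r in range(1, 11)}
best_r = min(errors, key=errors.get)
print("selected rank:", best_r)
print("gap to true Q:", np.linalg.norm(rank_r_fit(Q_fit, best_r) - Q_true))
```

Too small a rank underfits (high asymptotic bias), too large a rank chases the noise (overfitting), and the held-out error is minimized near the true rank.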

Speakers
David Gutman
Shahin Shahrampour
Chenliang Li
Location: Taper Hall (THH) 110, 3501 Trousdale Pkwy, Los Angeles, CA 90089
