Monday July 21, 2025 4:15pm - 5:30pm PDT
Session: Fast Algorithms for Data Science
Chair: Shota Takahashi
Cluster: Optimization For Data Science

Talk 1: Sparsity Constrained and Sparsity Regularized Piecewise Linear Minimization: Optimality Conditions and Algorithms
Speaker: Mustafa Celebi Pinar
Abstract: Motivated by applications such as sparsity-regularized boosting, in which so-called boosting algorithms maximize the soft margin of a produced linear combination of base hypotheses (given an oracle for producing base hypotheses) over a training set of plus/minus labeled examples, we study optimality conditions for the resulting piecewise-linear minimization problems with sparsity constraints or sparsity regularization. Different concepts of optimality are defined and developed. Algorithms are proposed and tested, and numerical results will be reported.
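
For readers unfamiliar with this problem class, the sketch below minimizes a generic piecewise-linear objective (a hinge loss) with an l1 sparsity regularizer by a plain subgradient method. The loss, the step size, and the toy data are illustrative assumptions only; this is background, not the speaker's method.

import numpy as np

def l1_regularized_hinge(X, y, lam=0.1, step=0.01, iters=2000):
    # Subgradient descent on the piecewise-linear objective
    #   f(w) = mean_i max(0, 1 - y_i <x_i, w>) + lam * ||w||_1,
    # a generic stand-in for sparsity-regularized piecewise-linear minimization.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = 1.0 - y * (X @ w)
        active = (margins > 0).astype(float)                  # which hinge pieces are active
        g_hinge = -(X * (y * active)[:, None]).mean(axis=0)   # subgradient of the hinge part
        g_l1 = lam * np.sign(w)                               # subgradient of the l1 part
        w -= step * (g_hinge + g_l1)
    return w

# Toy usage with random plus/minus labels (illustrative data only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sign(X @ rng.normal(size=10) + 0.1 * rng.normal(size=200))
w_hat = l1_regularized_hinge(X, y)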

Talk 2: Adaptive first-order method for nonconvex optimization derived from vanishing damping continuous-time dynamics
Speaker: Kansei Ushiyama
Abstract: We propose a new first-order algorithm for nonconvex functions with Lipschitz continuous gradients and Hessian matrices. Existing first-order methods use momentum to achieve the lowest known computational complexity for finding a stationary point. Their limitation is that they either require knowledge of parameters, including Lipschitz constants, or rely on a restart strategy that resets the momentum and can slow down the algorithm. Our method attains the lowest known complexity, does not require knowledge of these parameters, and uses a strategy other than restarting that does not reset the momentum. The algorithm is derived from a continuous-time algorithm that can be interpreted as a dynamics with vanishing damping. We show numerically that our method works efficiently on some problems.
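
As background for the continuous-time viewpoint, the sketch below discretizes a standard vanishing-damping dynamics, x''(t) + (alpha/t) x'(t) + grad f(x(t)) = 0, with a simple explicit scheme. The step size, the damping parameter alpha, and the toy function are illustrative assumptions; this is not the speaker's adaptive, parameter-free method.

import numpy as np

def vanishing_damping_descent(grad, x0, step=0.01, alpha=3.0, iters=500):
    # Explicit Euler-style discretization of the damped dynamics
    #   x''(t) + (alpha / t) x'(t) + grad f(x(t)) = 0,
    # whose friction coefficient alpha/t vanishes as t grows.
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)                              # velocity, approximates x'(t)
    for k in range(1, iters + 1):
        t = k * step
        v += step * (-(alpha / t) * v - grad(x))      # velocity update from the ODE
        x += step * v                                 # position update
    return x

# Toy usage on the nonconvex scalar function f(x) = x^4 - 3x^2 (illustrative only).
grad = lambda x: 4 * x**3 - 6 * x
x_star = vanishing_damping_descent(grad, x0=[0.5])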

Talk 3: Accelerated Convergence of Frank–Wolfe Algorithms with Adaptive Bregman Step-Size Strategy
Speaker: Shota Takahashi
Abstract: We propose a Frank–Wolfe (FW) algorithm with an adaptive Bregman step-size strategy for smooth adaptable (weakly) convex functions. In this setting the gradient of the objective function is not necessarily Lipschitz continuous; we only require the smooth adaptable property, so our assumptions are less restrictive than those of existing FW algorithms. We establish convergence guarantees in various settings, with rates ranging from sublinear to linear depending on the assumptions. Assuming that the objective function is weakly convex, we also provide both local sublinear and local linear convergence in terms of the primal gap under a (local) quadratic growth condition. In addition, we propose a variant of the away-step FW algorithm using Bregman distances and establish its global linear convergence for convex optimization and its local linear convergence for nonconvex optimization over polytopes under a (local) quadratic growth condition. Numerical experiments demonstrate that the proposed FW algorithms outperform existing methods. This talk is based on joint work with Akiko Takeda and Sebastian Pokutta.
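
For context, the sketch below implements the classic Frank–Wolfe method over the probability simplex with the standard 2/(k+2) step size. The adaptive Bregman step-size strategy and the away-step variant from the talk are not reproduced here, and the least-squares toy problem is an illustrative assumption.

import numpy as np

def frank_wolfe_simplex(grad, x0, iters=200):
    # Classic Frank-Wolfe over the probability simplex: each step solves the
    # linear subproblem min_{s in simplex} <grad f(x), s>, whose solution is a
    # coordinate unit vector, then moves toward it with the 2/(k+2) rule.
    x = np.array(x0, dtype=float)
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0                 # vertex minimizing the linearization
        gamma = 2.0 / (k + 2.0)               # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy usage: minimize ||Ax - b||^2 over the simplex (illustrative data only).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 5)), rng.normal(size=30)
grad = lambda x: 2 * A.T @ (A @ x - b)
x_opt = frank_wolfe_simplex(grad, x0=np.ones(5) / 5)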

Speakers
Mustafa Celebi Pinar

Kansei Ushiyama

Shota Takahashi

Assistant Professor, The University of Tokyo
Location: Taper Hall (THH) 106, 3501 Trousdale Pkwy, Los Angeles, CA 90089
