Monday July 21, 2025 1:15pm - 2:30pm PDT
Session: Proximal Methods and Splitting Algorithms
Chair: Mikhail Solodov

Talk 1: Asymmetric Variable Metric Proximal Point Methods, with Relations Between Relative-Errors and Summable-Errors
Speaker: Mikhail Solodov
Abstract: For the problem of finding zeros of maximally monotone operators, we introduce a general scheme that allows both symmetric and asymmetric metrics in proximal point algorithms. Our approach subsumes various proximal point methods in the literature as special cases. Moreover, it brings new insights into the relationship between hybrid proximal methods with bounded relative-error approximations and inexact proximal methods with error-summability conditions. In particular, introducing asymmetric metrics makes it possible to cast summable errors, asymptotically, in the framework of relative ones. Global convergence is established under standard assumptions. In addition, a generalization of Fejér monotonicity is presented to obtain a local linear convergence rate under a calmness condition on the operator.
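
To fix ideas, here is a minimal sketch (not the talk's scheme) of a variable-metric proximal point iteration for an affine monotone operator T(x) = A x - b, whose zero is x* = A^{-1} b. The metric M, stepsize c, and problem data are illustrative assumptions; each step applies the resolvent in the metric M by solving M(x+ - x) + c(A x+ - b) = 0.

```python
import numpy as np

# Illustrative variable-metric proximal point iteration (hypothetical data):
# x_{k+1} solves M(x_{k+1} - x_k) + c (A x_{k+1} - b) = 0, i.e. the
# resolvent of T(x) = A x - b in the metric M.
A = np.array([[3.0, 1.0], [1.0, 2.0]])  # positive definite => T is monotone
b = np.array([1.0, 1.0])
M = np.diag([2.0, 0.5])                 # symmetric positive-definite metric
c = 1.0                                 # proximal stepsize

x = np.zeros(2)
for _ in range(100):
    x = np.linalg.solve(M + c * A, M @ x + c * b)

print(np.allclose(x, np.linalg.solve(A, b)))  # -> True: converged to the zero of T
```

The asymmetric-metric case discussed in the talk replaces the symmetric M above with a nonsymmetric one, which is what lets summable errors be recast as relative errors.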

Talk 2: A Full Splitting Algorithm for Fractional Programs with Structured Numerators and Denominators
Speaker: Min Tao
Abstract: In this paper, we consider a class of nonconvex and nonsmooth fractional programming problems whose numerator is the sum of a convex, possibly nonsmooth function composed with a linear operator and a differentiable, possibly nonconvex function, and whose denominator is a convex, possibly nonsmooth function composed with a linear operator. These problems have applications in various fields, including CT reconstruction and sparse signal recovery. We propose an adaptive full splitting proximal subgradient algorithm with an extrapolated step that addresses the challenge of evaluating the composition in the numerator by decoupling the linear operator from the nonsmooth component. Specifically, the nonsmooth function is evaluated through its proximal operator, while the linear operator is accessed only through forward evaluations. Furthermore, the smooth component in the numerator is evaluated through its gradient, the nonsmooth component in the denominator is handled via its subgradient, and the linear operator in the denominator is likewise accessed through forward evaluations. We demonstrate subsequential convergence toward an approximate lifted stationary point and ensure global convergence under the Kurdyka-Łojasiewicz property, all achieved without relying on any full-row-rank assumptions on the linear operators. We further discuss the tightness of the convergence results for the proposed algorithm and its variants, as well as the reasoning behind targeting an approximate lifted stationary point: we construct a scenario in which the algorithm can diverge when exact solutions are sought. Lastly, we present a practical version of the algorithm incorporating a nonmonotone line search, which significantly enhances its convergence performance. Our theoretical findings are validated through simulations involving limited-angle CT reconstruction and the robust Sharpe ratio minimization problem.
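
As a hedged sketch of one building block named in the abstract (not the paper's full splitting scheme): a composition g(Kx) with g = ||.||_1 can be decoupled by evaluating g through its closed-form proximal operator (soft-thresholding) while the linear operator K enters only through forward evaluations, so no full-row-rank assumption on K is needed. The update below uses hypothetical names and stepsizes for illustration only.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1, computed componentwise in closed form.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(1)
K = rng.standard_normal((5, 3))   # linear operator, used only via K @ v and K.T @ v
x = rng.standard_normal(3)
step = 0.1                        # illustrative stepsize

# A generic proximal-gradient style update for step * ||K x||_1, with an
# auxiliary splitting variable z that absorbs the nonsmooth part:
z = soft_threshold(K @ x, step)       # prox evaluation handles the l1 term
x = x - step * (K.T @ (K @ x - z))    # K appears only in forward evaluations
```

This decoupling is the reason the composition never has to be inverted, which is what the "full splitting" in the title refers to.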

Talk 3: Inexact Proximal Point Algorithms for Zeroth-Order Global Optimization
Speaker: Minxin Zhang
Abstract: This work concerns the zeroth-order global minimization of continuous nonconvex functions with a unique global minimizer and possibly multiple local minimizers. We formulate a theoretical framework for inexact proximal point (IPP) methods for global optimization, establishing convergence guarantees under mild assumptions when either deterministic or stochastic estimates of proximal operators are used. The quadratic regularization in the proximal operator and the scaling effect of a positive parameter create a concentrated landscape of an associated Gibbs measure that is practically effective for sampling. The convergence of the expectation under the Gibbs measure is established, providing a theoretical foundation for evaluating proximal operators inexactly using sampling-based methods such as Monte Carlo (MC) integration. In addition, we propose a new approach based on tensor train (TT) approximation. This approach employs a randomized TT cross algorithm to efficiently construct a low-rank TT approximation of a discretized function using a small number of function evaluations, and we provide an error analysis for the TT-based estimation. We then propose two practical IPP algorithms, TT-IPP and MC-IPP. The TT-IPP algorithm leverages TT estimates of the proximal operators, while the MC-IPP algorithm employs MC integration to estimate the proximal operators. Both algorithms are designed to adaptively balance efficiency and accuracy in inexact evaluations of proximal operators. The effectiveness of the two algorithms is demonstrated through experiments on diverse benchmark functions and various applications. Reference: Zhang, M., Han, F., Chow, Y. T., Osher, S., & Schaeffer, H. (2024). Inexact Proximal Point Algorithms for Zeroth-Order Global Optimization. arXiv preprint arXiv:2412.11485.
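
The sampling idea behind the MC-IPP variant can be sketched as follows (a toy illustration under assumed parameter names, not the authors' code): the proximal operator prox_{tf}(x) = argmin_y f(y) + ||y - x||^2/(2t) is estimated as the mean of the associated Gibbs measure rho(y) ∝ exp(-(f(y) + ||y - x||^2/(2t))/delta), here via self-normalized importance sampling with the Gaussian factor as the proposal.

```python
import numpy as np

def mc_prox(f, x, t=1.0, delta=1.0, n=200_000, seed=0):
    # Monte Carlo estimate of prox_{t f}(x) as the mean of the Gibbs measure
    # rho(y) ∝ exp(-(f(y) + (y - x)^2 / (2 t)) / delta).  Samples are drawn
    # from the Gaussian factor, so each carries weight exp(-f(y) / delta).
    rng = np.random.default_rng(seed)
    y = x + np.sqrt(t * delta) * rng.standard_normal(n)
    w = np.exp(-f(y) / delta)
    return np.sum(w * y) / np.sum(w)   # self-normalized importance sampling

f = lambda y: (y - 0.5) ** 2           # toy objective, zeroth-order access only
est = mc_prox(f, x=0.0)
# Exact prox for this quadratic: argmin (y - 0.5)^2 + y^2 / 2 = 1/3
print(abs(est - 1/3) < 0.05)           # -> True
```

Only function evaluations of f are used, which is what makes the scheme zeroth-order; the TT-IPP variant replaces the sampling step with a low-rank tensor-train surrogate of f.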

Speakers

Mikhail Solodov

IMPA
http://www.impa.br/~optim/solodov.html

Min Tao

Professor, Nanjing University
Min Tao's primary research interests lie in the theory and applications of optimization algorithms, with a particular focus on first-order methods and their applications in machine learning. Her representative works have been published in leading journals such as the SIAM Journal...

Minxin Zhang

Taper Hall (THH) 215 3501 Trousdale Pkwy, 215, Los Angeles, CA 90089
