Session: Nonsmooth Optimization - Algorithms
Chair: Dânâ Davar
Talk 1: TRFD: A derivative-free trust-region method based on finite differences for composite nonsmooth optimization
Speaker: Dânâ Davar
Abstract: In this work we present TRFD, a derivative-free trust-region method based on finite differences for minimizing composite functions of the form $f(x)=h(F(x))$, where $F$ is a black-box function assumed to have a Lipschitz continuous Jacobian, and $h$ is a known convex Lipschitz function, possibly nonsmooth. The method approximates the Jacobian of $F$ via forward finite differences. We establish an upper bound for the number of evaluations of $F$ that TRFD requires to find an $\epsilon$-approximate stationary point. For $\ell_1$ and minimax problems, we show that our complexity bound reduces to $\mathcal{O}(n\epsilon^{-2})$ for specific instances of TRFD, where $n$ is the number of variables of the problem. Assuming that $h$ is monotone and that the components of $F$ are convex, we also establish a worst-case complexity bound, which reduces to $\mathcal{O}(n\epsilon^{-1})$ for minimax problems. Numerical results are provided to illustrate the relative efficiency of TRFD in comparison with existing derivative-free solvers for composite nonsmooth optimization. This is a joint work with Geovani Grapiglia. The arXiv manuscript can be found at https://arxiv.org/abs/2410.09165
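As a rough illustration of the finite-difference building block described above (not of the TRFD method itself), the following Python sketch approximates the Jacobian of a black-box map $F$ by forward differences, using $n+1$ evaluations of $F$; the function name forward_fd_jacobian, the step size t, and the example map are illustrative assumptions.

```python
import numpy as np

def forward_fd_jacobian(F, x, t=1e-8):
    """Approximate the Jacobian of F at x by forward finite differences.

    Column j is (F(x + t*e_j) - F(x)) / t, so the approximation costs one
    extra evaluation of F per variable (n + 1 evaluations in total).
    """
    x = np.asarray(x, dtype=float)
    Fx = np.asarray(F(x), dtype=float)
    J = np.empty((Fx.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = t
        J[:, j] = (np.asarray(F(x + e), dtype=float) - Fx) / t
    return Fx, J

# Toy example: F(x) = (x1^2 - x2, x1 + x2^3); a composite objective would
# then be f(x) = h(F(x)), e.g. h = ||.||_1 for an l1 problem.
F = lambda x: np.array([x[0]**2 - x[1], x[0] + x[1]**3])
Fx, J = forward_fd_jacobian(F, np.array([1.0, 2.0]))
```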
Talk 2: An augmented Lagrangian primal-dual semismooth Newton method for multi-block composite optimization
Speaker: Zhanwang Deng
Abstract: In this talk, we develop a novel primal-dual semismooth Newton method for solving linearly constrained multi-block convex composite optimization problems, which include commonly used models in image processing and conic programming. First, a differentiable augmented Lagrangian (AL) function is constructed by utilizing the Moreau envelopes of the nonsmooth functions. This enables us to derive an equivalent saddle point problem and to establish strong AL duality under Slater's condition. Consequently, a semismooth system of nonlinear equations, rather than the inclusion-form KKT conditions, is formulated to characterize the optimality of the original problem. We then develop a semismooth Newton method, called ALPDSN, which uses purely second-order steps and a globalization strategy based on a nonmonotone line search. Through a connection to inexact first-order steps when the regularization parameter is sufficiently large, the global convergence of ALPDSN is established. Under regularity conditions, namely partial smoothness, a local error bound, and strict complementarity, we show that both the primal and dual iteration sequences converge superlinearly, and we provide concrete examples where these regularity conditions are met. Numerical results on image restoration with two regularization terms, the corrected tensor nuclear norm problem, and semidefinite programming on the Mittelmann benchmark are presented to demonstrate the high efficiency and robustness of ALPDSN. In future work, we also plan to develop software for solving a class of convex composite optimization problems.
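The construction above hinges on replacing each nonsmooth function with its Moreau envelope, which is differentiable and whose gradient can be expressed through the proximal operator. A minimal Python sketch of that single ingredient (not of ALPDSN itself), assuming the nonsmooth function is the l1 norm; the names prox_l1, moreau_envelope_l1, and the parameter mu are illustrative.

```python
import numpy as np

def prox_l1(x, mu):
    """Proximal operator of mu * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def moreau_envelope_l1(x, mu):
    """Moreau envelope of the l1 norm and its gradient.

    env_mu(x) = min_z ||z||_1 + ||z - x||^2 / (2*mu); the minimizer is the
    prox point p, the envelope is smooth, and grad env_mu(x) = (x - p) / mu.
    """
    p = prox_l1(x, mu)
    value = np.sum(np.abs(p)) + np.sum((p - x) ** 2) / (2.0 * mu)
    grad = (x - p) / mu
    return value, grad

# Example: the envelope smooths the kink of |.| at the origin.
val, g = moreau_envelope_l1(np.array([1.5, -0.2, 0.0]), mu=0.5)
```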
Talk 3: Anderson acceleration in nonsmooth problems: local convergence via active manifold identification
Speaker: Hao Wang
Abstract: Anderson acceleration is an effective technique for enhancing the efficiency of fixed-point iterations; however, analyzing its convergence in nonsmooth settings presents significant challenges. In this paper, we investigate a class of nonsmooth optimization algorithms characterized by the active manifold identification property. This class includes a diverse array of methods such as the proximal point method, proximal gradient method, proximal linear method, proximal coordinate descent method, Douglas-Rachford splitting (or the alternating direction method of multipliers), and the iteratively reweighted L1 method, among others. Under the assumption that the optimization problem possesses an active manifold at a stationary point, we establish a local R-linear convergence rate for the Anderson-accelerated algorithm. Our extensive numerical experiments further highlight the robust performance of the proposed Anderson-accelerated methods.
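For concreteness, a minimal Python sketch of the basic Anderson-acceleration loop discussed above (window size m, small least-squares problem on residual differences), applied here to the proximal-gradient fixed-point map of a toy lasso instance; the function names, parameters, and test problem are illustrative assumptions rather than the setup used in the talk.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, max_iter=200, tol=1e-10):
    """Anderson acceleration (window size m) for the fixed-point map x = g(x).

    Each step solves a least-squares problem on recent residual differences
    and extrapolates the next iterate from the stored g-values.
    """
    x = np.asarray(x0, dtype=float)
    g_hist, r_hist = [], []                 # histories of g(x_k) and residuals
    for _ in range(max_iter):
        gx = np.asarray(g(x), dtype=float)
        r = gx - x
        g_hist.append(gx)
        r_hist.append(r)
        if np.linalg.norm(r) < tol:
            return gx
        if len(r_hist) > m + 1:              # keep at most m differences
            g_hist.pop(0)
            r_hist.pop(0)
        if len(r_hist) == 1:
            x = gx                            # plain fixed-point step to start
            continue
        dR = np.column_stack([r_hist[i + 1] - r_hist[i] for i in range(len(r_hist) - 1)])
        dG = np.column_stack([g_hist[i + 1] - g_hist[i] for i in range(len(g_hist) - 1)])
        gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
        x = gx - dG @ gamma                   # Anderson-mixed iterate
    return x

# Example: proximal-gradient map for min 0.5*||Ax - b||^2 + lam*||x||_1,
# whose fixed points are the minimizers; Anderson acceleration wraps the map.
rng = np.random.default_rng(0)
A, b, lam, step = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.1, 0.01
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
pg_map = lambda x: soft(x - step * A.T @ (A @ x - b), step * lam)
x_star = anderson_fixed_point(pg_map, np.zeros(10))
```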