Session: Algorithms for structured Riemannian optimization problems II
Chair: Jiang Hu
Cluster: Optimization on Manifolds
Talk 1: An Inexact Riemannian Proximal DC Algorithm for Nonsmooth Riemannian DC Optimization
Speaker: Bo Jiang
Abstract: In this talk, we address a new class of nonsmooth Riemannian optimization problems in which the objective is the sum of a smooth function and a nonsmooth difference-of-convex (DC) function. We first present application examples demonstrating the equivalence between the proposed DC formulation (with an appropriately chosen nonsmooth DC term) and the corresponding $\ell_0$-regularized or $\ell_0$-constrained Riemannian optimization problem. We then propose an inexact Riemannian proximal algorithmic framework for solving these nonsmooth Riemannian DC problems and show that it returns an $\epsilon$-Riemannian critical point within $\mathcal{O}(\epsilon^{-2})$ iterations. To solve the subproblems in this framework efficiently, we solve their regularized dual problems in an inexact but controllable fashion. Specifically, by carefully selecting the inexactness criteria and leveraging first-order methods, we develop a practical inexact Riemannian proximal algorithm and establish its overall iteration complexity. To the best of our knowledge, this is the first algorithm for this class of nonsmooth Riemannian DC optimization problems with such a theoretical guarantee. Numerical results on the sparse principal component analysis problem validate the effectiveness of the DC models and the efficiency of the proposed algorithms.
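As a concrete illustration of the DC-to-$\ell_0$ equivalence mentioned in the abstract, one standard construction (the abstract does not specify the talk's exact DC term, so this example is an assumption) uses the largest-$k$ norm $\|x\|_{(k)}$, the sum of the $k$ largest absolute entries of $x$:

```latex
% A standard DC encoding of sparsity (illustrative; not necessarily the
% talk's exact construction). Both norms below are convex, and
\|x\|_1 - \|x\|_{(k)} \;\ge\; 0, \qquad
\|x\|_1 - \|x\|_{(k)} = 0 \;\Longleftrightarrow\; \|x\|_0 \le k,
% so the l0-constrained problem on a manifold M admits the DC-penalized form
\min_{x \in \mathcal{M}} \; f(x) + \rho\bigl(\|x\|_1 - \|x\|_{(k)}\bigr),
\qquad \rho > 0 \text{ sufficiently large}.
% One common way to state a proximal DC step (again illustrative) linearizes
% the concave part -\|x\|_{(k)} at x_k via a subgradient
% \xi_k \in \partial\|x_k\|_{(k)}:
x_{k+1} \approx \operatorname*{arg\,min}_{x \in \mathcal{M}}
\; \langle \nabla f(x_k),\, x - x_k \rangle
+ \rho\bigl(\|x\|_1 - \langle \xi_k, x \rangle\bigr)
+ \tfrac{1}{2t}\|x - x_k\|^2 .
```

The "$\approx$" is where the talk's contribution lies: such subproblems are solved only approximately, via their regularized duals, with inexactness criteria chosen so that the overall complexity guarantee still holds.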
Talk 2: A Flexible Algorithmic Framework for Nonconvex-Linear Minimax Problems on Riemannian Manifolds
Speaker: Meng Xu
Abstract: Recently, there has been growing interest in minimax problems on Riemannian manifolds due to their wide applications in machine learning and signal processing. Although many algorithms have been developed for minimax problems in the Euclidean setting, relatively few works have studied minimax problems on manifolds. In this talk, we focus on nonconvex-linear minimax problems on Riemannian manifolds. We propose a flexible Riemannian alternating descent ascent algorithmic framework and prove that it achieves the best iteration complexity known to date. Various customized, simple yet efficient algorithms can be incorporated into the framework and applied to different problem scenarios. We also reveal intriguing similarities and differences between the algorithms developed within our framework and existing algorithms, providing important insights into why the former outperform the latter. Lastly, we report extensive numerical results on sparse principal component analysis (PCA), fair PCA, and sparse spectral clustering to demonstrate the superior performance of the proposed algorithms.
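The abstract does not spell out the framework's updates, but the basic shape of a Riemannian alternating descent ascent loop can be sketched on a toy instance. The sketch below (all step sizes, the toy objective, and the box constraint are assumptions, not the talk's method) uses the unit sphere and a linear coupling, so the inner maximization recovers an $\ell_1$ term, mirroring the sparse-PCA application:

```python
# Minimal sketch of Riemannian alternating descent ascent on the unit sphere,
# for the toy nonconvex-linear minimax problem
#     min_{||x||=1} max_{y in [-1,1]^m}  -x'Bx + y'Ax   (== -x'Bx + ||Ax||_1).
# Step sizes, updates, and the stopping rule are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 30
B = rng.standard_normal((n, n)); B = (B + B.T) / 2  # symmetric -> smooth nonconvex -x'Bx
A = rng.standard_normal((m, n))

x = rng.standard_normal(n); x /= np.linalg.norm(x)  # point on the sphere S^{n-1}
y = np.zeros(m)                                     # dual variable in the box [-1,1]^m
eta_x, eta_y = 1e-2, 1.0

for it in range(500):
    # Ascent in y: the coupling is linear, so its y-gradient is Ax;
    # project back onto the box.
    y = np.clip(y + eta_y * (A @ x), -1.0, 1.0)
    # Descent in x: Euclidean gradient of the coupled objective ...
    g = -2 * B @ x + A.T @ y
    # ... projected onto the tangent space of the sphere at x,
    rg = g - (x @ g) * x
    # followed by a retraction (here: a step, then renormalize).
    x = x - eta_x * rg
    x /= np.linalg.norm(x)

print("objective:", -x @ B @ x + np.abs(A @ x).sum())
```

The flexibility the talk refers to lies in swapping out these generic ascent/descent steps for customized subroutines suited to each problem scenario.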
Talk 3: Low-Rank Tensor Learning by Generalized Nonconvex Regularization
Speaker: Xiongjun Zhang
Abstract: In this talk, we study the problem of low-rank tensor learning, where only a few training samples are observed and the underlying tensor has a low-rank structure. Existing methods based on the sum of nuclear norms of the unfolding matrices of a tensor may be suboptimal. To exploit the low-rankness of the underlying tensor effectively, we propose a nonconvex model based on the transformed tensor nuclear norm for low-rank tensor learning. Specifically, a family of nonconvex functions is applied to the singular values of all frontal slices of the tensor in the transformed domain to characterize its low-rankness. An error bound between a stationary point of the nonconvex model and the underlying tensor is established under restricted strong convexity of the loss function (such as the least squares loss or logistic regression loss) and suitable regularity conditions on the nonconvex penalty function. By rewriting the nonconvex function as the difference of two convex functions, we design a proximal majorization-minimization (PMM) algorithm to solve the resulting model. The global convergence and convergence rate of PMM are then established under very mild conditions. Numerical experiments on tensor completion and binary classification demonstrate the effectiveness of the proposed method over other state-of-the-art methods.
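The core computational kernel of such a method is the proximal step on the singular values of the frontal slices in the transformed domain. The talk's transform and penalty family are not specified in the abstract, so the sketch below assumes the DFT along the third mode (as in the t-SVD framework) and an MCP-style firm shrinkage as a representative nonconvex spectral penalty; the helper names `transformed_svt` and `mcp_shrink` are hypothetical:

```python
# Minimal sketch of one proximal step under a nonconvex spectral penalty in a
# transformed tensor domain. Assumes the DFT along mode 3 and MCP (firm)
# shrinkage; the talk's actual transform and penalty may differ.
import numpy as np

def mcp_shrink(s, lam, gamma=2.0):
    """Firm/MCP thresholding of singular values: soft-threshold small values,
    leave values above gamma*lam untouched (less bias than the nuclear norm)."""
    return np.where(s <= gamma * lam,
                    np.maximum(s - lam, 0.0) / (1.0 - 1.0 / gamma),
                    s)

def transformed_svt(X, lam):
    """Shrink the singular values of every frontal slice of X in the
    transformed (DFT) domain, then transform back."""
    Xf = np.fft.fft(X, axis=2)              # transform along the third mode
    for k in range(Xf.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Xf[:, :, k] = (U * mcp_shrink(s, lam)) @ Vh
    return np.fft.ifft(Xf, axis=2).real     # real input -> drop imaginary part

# Toy usage: denoise a low-rank tensor observed with additive noise.
rng = np.random.default_rng(1)
L = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((d, 2)) for d in (15, 15, 8)))
Y = L + 0.1 * rng.standard_normal(L.shape)
X = transformed_svt(Y, lam=2.0)
print("rel. error before/after:",
      np.linalg.norm(Y - L) / np.linalg.norm(L),
      np.linalg.norm(X - L) / np.linalg.norm(L))
```

In a full PMM iteration this step would be applied to a majorized surrogate of the loss, with the DC decomposition of the penalty supplying the convex majorizer.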