Session: Feasible and infeasible methods for optimization on manifolds II
Chair: Bin Gao
Cluster: Optimization on Manifolds
Talk 1: A decentralized proximal gradient tracking algorithm for composite optimization on Riemannian manifolds
Speaker: Lei Wang
Abstract: This talk focuses on minimizing the sum of a smooth function and a nonsmooth regularization term over a compact Riemannian submanifold embedded in a Euclidean space, in a decentralized setting. At present, there are two main types of approaches for tackling such composite optimization problems. The first, subgradient-based approaches, rely on subgradient information of the objective function to update variables, achieving an iteration complexity of $O(\epsilon^{-4}\log^2(\epsilon^{-2}))$. The second, smoothing approaches, construct a smooth approximation of the nonsmooth regularization term, resulting in an iteration complexity of $O(\epsilon^{-4})$. This talk presents a proximal-gradient-type algorithm that fully exploits the composite structure. Global convergence to a stationary point is established with a significantly improved iteration complexity of $O(\epsilon^{-2})$. To validate the effectiveness and efficiency of the proposed method, we present numerical results from real-world applications, showcasing its superior performance compared to existing approaches.
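The decentralized gradient-tracking machinery is beyond the scope of an abstract, but the proximal-gradient update it builds on can be sketched. Below is a minimal single-agent illustration, not the speaker's algorithm, for minimizing $f(x) + \lambda\|x\|_1$ over the unit sphere: a Euclidean gradient step, the $\ell_1$ proximal map, and a normalization retraction. The function names, step size, and problem instance are all hypothetical; the actual method additionally handles the proximal subproblem in a manifold-aware way and tracks gradients across agents.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_step_sphere(x, grad_f, lam, alpha):
    # One simplified proximal-gradient step on the unit sphere:
    # Euclidean gradient step, prox of the l1 term, then retraction
    # back to the sphere by normalization.
    v = soft_threshold(x - alpha * grad_f(x), alpha * lam)
    return v / np.linalg.norm(v)

# Hypothetical instance: sparse leading eigenvector, f(x) = -x' A x.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20)); A = (A + A.T) / 2
grad_f = lambda x: -2 * A @ x
x = rng.standard_normal(20); x /= np.linalg.norm(x)
for _ in range(200):
    x = prox_grad_step_sphere(x, grad_f, lam=0.1, alpha=0.01)
```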
Talk 2: A low-rank augmented Lagrangian method for SDP-RLT relaxations of mixed-binary quadratic programs
Speaker: Di Hou
Abstract: The mixed-binary quadratic program (MBQP) with both equality and inequality constraints is a well-known NP-hard problem that arises in various applications. In this work, we focus on two relaxations: the doubly nonnegative (DNN) relaxation and the SDP-RLT relaxation, which combines the Shor relaxation with a partial first-order reformulation-linearization technique (RLT). We demonstrate the equivalence of these two relaxations by introducing slack variables. Furthermore, we extend RNNAL, a globally convergent Riemannian augmented Lagrangian method (ALM) originally developed for solving DNN relaxations, to handle SDP-RLT relaxations. RNNAL penalizes the inequality constraints while keeping the equality constraints in the ALM subproblems. Applying a low-rank decomposition in each ALM subproblem transforms the feasible region into an algebraic variety whose favorable geometric properties allow us to apply a Riemannian gradient descent method. The algorithm can efficiently solve general semidefinite programming (SDP) problems, including relaxations of quadratically constrained quadratic programs (QCQPs). Extensive numerical experiments confirm the efficiency of the proposed method.
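To make the low-rank ALM idea concrete, here is a minimal sketch for a toy SDP with the diagonal constraint $\mathrm{diag}(X) = 1$: the factorization $X = RR^{\top}$ turns each ALM subproblem into an unconstrained problem in $R$, solved here by plain gradient descent. Note that this generic skeleton penalizes the equality constraint, whereas RNNAL keeps equalities in the subproblem, penalizes inequalities, and runs Riemannian descent on the resulting variety; the instance, rank, and step sizes below are assumptions for illustration.

```python
import numpy as np

def alm_low_rank_sdp(C, r, rho=10.0, outer=30, inner=200, lr=1e-3):
    # Burer-Monteiro-style augmented Lagrangian sketch for
    #   min <C, X>  s.t.  diag(X) = 1,  X PSD,
    # with X = R R^T, R of size n x r, and gradient descent on each
    # ALM subproblem. This is NOT the RNNAL scheme, only a skeleton.
    n = C.shape[0]
    R = np.random.default_rng(1).standard_normal((n, r)) / np.sqrt(r)
    y = np.zeros(n)                           # multipliers for diag(X) = 1
    for _ in range(outer):
        for _ in range(inner):
            resid = np.sum(R * R, axis=1) - 1.0       # diag(R R^T) - 1
            # Gradient of <C,RR^T> + y^T resid + (rho/2)||resid||^2 in R.
            G = 2.0 * (C @ R) + 2.0 * ((y + rho * resid)[:, None] * R)
            R -= lr * G
        y += rho * (np.sum(R * R, axis=1) - 1.0)      # multiplier update
    return R

# Hypothetical max-cut-style instance.
rng = np.random.default_rng(2)
M = rng.standard_normal((15, 15)); C = -(M + M.T) / 2
R = alm_low_rank_sdp(C, r=4)
```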
Talk 3: An improved unconstrained approach for bilevel optimization
Speaker: Nachuan Xiao
Abstract: In this talk, we focus on the nonconvex-strongly-convex bilevel optimization problem (BLO), in which the upper-level objective is nonconvex and possibly nonsmooth, while the lower-level problem is smooth and strongly convex with respect to the lower-level variable. We show that the feasible region of BLO is a Riemannian manifold, and we transform BLO into its corresponding unconstrained constraint dissolving problem (CDB), whose objective function is explicitly formulated from the objective functions in BLO. We prove that BLO is equivalent to the unconstrained optimization problem CDB. Consequently, various efficient unconstrained approaches, together with their theoretical results, can be directly applied to BLO through CDB. We propose a unified framework for developing subgradient-based methods for CDB. Remarkably, several existing efficient algorithms fit into this unified framework and can be interpreted as descent algorithms for CDB. These examples further demonstrate the great potential of the proposed approach.
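As a rough illustration of the constraint dissolving idea (the exact CDB construction is given in the talk, not reproduced here), the sketch below treats the lower-level stationarity condition $\nabla_y g(x,y) = 0$ as the constraint defining the feasible manifold, forms a hypothetical dissolving function that pulls $y$ toward feasibility and penalizes the stationarity residual, and minimizes it with plain gradient descent. The toy problem, the map $(x, y) \mapsto (x, \, y - \eta \nabla_y g(x,y))$, and all parameters are assumptions for illustration only.

```python
import numpy as np

def num_grad(f, z, eps=1e-6):
    # Central-difference gradient of a scalar function f at z.
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z); e[i] = eps
        g[i] = (f(z + e) - f(z - e)) / (2 * eps)
    return g

# Toy BLO (all choices here are hypothetical):
#   upper level:  F(x, y) = 0.5 ||y - t||^2 + 0.5 ||x||^2
#   lower level:  g(x, y) = 0.5 ||y - B x||^2  (strongly convex in y),
# so the feasible region is {(x, y) : y - B x = 0}.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 3)); t = rng.standard_normal(4)
nx, ny = 3, 4
F = lambda x, y: 0.5 * np.sum((y - t) ** 2) + 0.5 * np.sum(x ** 2)
c = lambda x, y: y - B @ x                  # = grad_y g(x, y)

def cdb(z, eta=0.5, beta=10.0):
    # Hypothetical constraint dissolving function: evaluate the upper
    # level after pulling y toward feasibility, plus a quadratic
    # penalty on the lower-level stationarity residual.
    x, y = z[:nx], z[nx:]
    return F(x, y - eta * c(x, y)) + 0.5 * beta * np.sum(c(x, y) ** 2)

z = rng.standard_normal(nx + ny)
for _ in range(500):                        # plain gradient descent on CDB
    z -= 0.05 * num_grad(cdb, z)
```

Any unconstrained first-order method could replace the gradient loop here, which is the point the talk emphasizes: once BLO is dissolved into CDB, existing algorithms and their guarantees transfer directly.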