Thursday July 24, 2025 4:15pm - 5:30pm PDT
Session: Sparse and Low-rank Optimization
Chair: Ishy Zagdoun

Talk 1: On solving a rank regularized minimization problem via equivalent factorized column-sparse regularized models
Speaker: Wenjing Li
Abstract: The rank regularized minimization problem is an ideal model for low-rank matrix completion/recovery. The matrix factorization approach transforms the high-dimensional rank regularized problem into a low-dimensional factorized column-sparse regularized problem. The latter greatly facilitates fast computation, but must overcome the simultaneous non-convexity of the loss and regularization functions. In this talk, we consider the factorized column-sparse regularized model. First, we augment the model with bound constraints and establish an equivalence between the resulting factorization problem and the rank regularized problem. Further, we strengthen the optimality condition for stationary points of the factorization problem and define the notion of a strong stationary point. Moreover, we establish the equivalence between the factorization problem and a nonconvex relaxation of it, in the sense of global minimizers and strong stationary points. To solve the factorization problem, we design two types of algorithms and give an adaptive scheme to reduce their computational cost. The first algorithm is built from the relaxation point of view; after finitely many iterations its iterates inherit some properties of global minimizers of the factorization problem, and we analyze their convergence to strong stationary points. The second algorithm solves the factorization problem directly: we improve the PALM algorithm introduced by Bolte et al. (Math Program Ser A 146:459-494, 2014) for this problem and give sharpened convergence results. Finally, we conduct numerical experiments to show the promising performance of the proposed model and algorithms for low-rank matrix completion. The corresponding publication is Math Program Ser A, 2024, doi: 10.1007/s10107-024-02103-1.
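
As a rough illustration (not the authors' algorithm), the sketch below shows one PALM-type alternating proximal-gradient pass for matrix completion with the factorization X = A B^T, using a column-wise group soft-threshold (an l2,1 surrogate for the column-l2,0 penalty) and a squared loss on the observed entries; the loss, surrogate penalty, and step-size rule are all assumptions made here for readability.

import numpy as np

def column_soft_threshold(Z, tau):
    # Proximal map of tau * sum_j ||Z[:, j]||_2: shrinks whole columns toward zero.
    norms = np.linalg.norm(Z, axis=0, keepdims=True)
    return Z * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def palm_step(A, B, M, mask, lam):
    # One alternating proximal-gradient (PALM-type) update for
    # min 0.5 * ||mask * (A B^T - M)||_F^2 + lam * (column penalty on A and B).
    R = mask * (A @ B.T - M)                      # residual on observed entries
    L_A = np.linalg.norm(B, 2) ** 2 + 1e-8        # Lipschitz estimate for the A-block
    A = column_soft_threshold(A - (R @ B) / L_A, lam / L_A)
    R = mask * (A @ B.T - M)
    L_B = np.linalg.norm(A, 2) ** 2 + 1e-8        # Lipschitz estimate for the B-block
    B = column_soft_threshold(B - (R.T @ A) / L_B, lam / L_B)
    return A, B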

Talk 2: Convergence of accelerated singular value shrinkage algorithm for nonconvex low-rank matrix regularization problem
Speaker: LIU Yanyan
Abstract: This paper develops a new framework to design and analyse the singular value shrinkage algorithm and its variants, built on the notion of the singular value shrinkage operator, for the nonconvex low-rank regularization problem. The truncation technique, Nesterov's acceleration, and the heavy-ball method are used to accelerate the traditional singular value shrinkage algorithm. We establish convergence to an approximate true low-rank solution of the nonconvex regularization problem under a restricted isometry condition and some mild parameter assumptions. Numerical results on synthetic and real data show that the proposed algorithms are competitive with state-of-the-art algorithms in terms of efficiency and accuracy.
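
For context, the singular value shrinkage operator soft-thresholds the singular values of its argument; the sketch below pairs it with a standard Nesterov (FISTA-style) extrapolation step for a nuclear-norm-regularized least-squares problem. This is a convex stand-in written purely for illustration, not the accelerated scheme or nonconvex regularizer analysed in the paper.

import numpy as np

def svt(Z, tau):
    # Singular value shrinkage: soft-threshold the singular values of Z.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def accelerated_svt(grad, X0, step, lam, iters=100):
    # grad: callable returning the gradient of the smooth data-fit term at a matrix.
    X_prev, Y, t = X0.copy(), X0.copy(), 1.0
    for _ in range(iters):
        X = svt(Y - step * grad(Y), step * lam)             # shrinkage at the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))   # FISTA momentum schedule
        Y = X + ((t - 1.0) / t_next) * (X - X_prev)         # Nesterov extrapolation
        X_prev, t = X, t_next
    return X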

Talk 3: Projecting onto a Capped Rotated Second-Order Cone for Sparse Optimization
Speaker: Ishy Zagdoun
Abstract: This paper presents a closed-form expression for the projection onto a capped rotated second-order cone, a convex set that emerges as part of the Boolean relaxation in sparse regression problems and, more broadly, in the perspective relaxation of mixed-integer nonlinear programs (MINLP) with binary indicator variables. The closed-form expression is divided into three cases, one of which simplifies to the projection onto a standard second-order cone (a widely used projection with a well-known solution involving three additional cases). The nontrivial solutions for the other two cases include the necessary and sufficient conditions for when the projection lies along the intersection of the rotated cone and a facet of a box. The ability to rapidly compute the projection onto a capped rotated second-order cone facilitates the development of efficient methods for solving the continuous relaxation of sparse optimization problems. These problems typically involve a Cartesian product of many such sets. Important machine learning tasks that benefit from this closed-form expression include solving the continuous relaxation of a sparse regression problem using projected gradient methods and the accelerated variant of this method (FISTA). Since the closed-form expression is derived in a general manner, it can be seamlessly extended to regression problems with group sparsity constraints, involving cones of a dimension beyond the three-dimensional cone used in standard sparsity penalties and constraints.
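
The paper's three-case closed form for the capped rotated cone is not reproduced here; as a reference point, the sketch below implements the well-known projection onto the standard second-order cone {(x, t) : ||x||_2 <= t}, which the abstract notes appears as one of the cases.

import numpy as np

def project_soc(x, t):
    # Euclidean projection of (x, t) onto the standard second-order cone {(x, t) : ||x||_2 <= t}.
    nx = np.linalg.norm(x)
    if nx <= t:                      # already inside the cone
        return x, t
    if nx <= -t:                     # inside the polar cone: the projection is the origin
        return np.zeros_like(x), 0.0
    alpha = 0.5 * (nx + t)           # otherwise project onto the cone's boundary
    return alpha * x / nx, alpha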

Speakers

Wenjing Li

Associate Researcher, Harbin Institute of Technology
Name: Dr. Wenjing Li. Title: Associate Researcher. Affiliation: Harbin Institute of Technology. Bio: Dr. Wenjing Li is an associate researcher at Harbin Institute of Technology. Under the guidance of Prof. Wei Bian, she obtained her Master's and PhD degrees from the School of Mathematics...

LIU Yanyan

Taper Hall (THH) 212 3501 Trousdale Pkwy, 212, Los Angeles, CA 90089
