Thursday July 24, 2025 4:15pm - 5:30pm PDT
Session: Sparsity Optimization
Chair: Jiachang Liu

Talk 1: (Canceled) A PATH-BASED APPROACH TO CONSTRAINED SPARSE OPTIMIZATION
Speaker: Nadav Hallak
Abstract: This talk presents a path-based approach to the minimization of a continuously differentiable function over sparse symmetric sets, a hard problem whose necessary optimality conditions form a hierarchy of restrictiveness. To achieve the more restrictive conditions in the hierarchy, contemporary methods require a support optimization oracle that exactly solves the problem in smaller dimensions. The path-based approach presented here yields a path-based optimality condition that is well placed in this hierarchy, together with a method that attains it without a support optimization oracle and, moreover, is projection-free. New results for the regularized linear minimization problem over sparse symmetric sets, and their implications, are also presented.
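As a hedged illustration of the kind of linear minimization subproblem over a sparse symmetric set that the abstract mentions (not the speaker's method), the sketch below solves the linear minimization oracle in closed form over one common such set, the intersection of an l0 ball and an l-infinity ball, and plugs it into a generic projection-free conditional-gradient step. The choice of set, the function names, and the step size are assumptions made for illustration only.

```python
import numpy as np

def lmo_sparse_linf_ball(g, k, radius=1.0):
    """Linear minimization oracle: argmin <g, s> over the sparse symmetric
    set {s : ||s||_0 <= k, ||s||_inf <= radius}.  The minimizer places
    -radius * sign(g_i) on the k coordinates of largest |g_i| and zero
    elsewhere, so the oracle costs O(n log n) via sorting."""
    s = np.zeros_like(g, dtype=float)
    top_k = np.argsort(-np.abs(g))[:k]   # indices of the k largest |g_i|
    s[top_k] = -radius * np.sign(g[top_k])
    return s

def conditional_gradient_step(x, grad_f, k, step=0.1):
    """One generic projection-free (Frank-Wolfe style) step for minimizing
    f over the convex hull of the set above; grad_f is the user's gradient
    oracle.  A linear function attains its minimum over the hull at a point
    of the set itself, so the same oracle drives the step."""
    s = lmo_sparse_linf_ball(grad_f(x), k)
    return x + step * (s - x)
```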

Talk 2: Scalable First-order Method for Certifying Optimal k-Sparse GLMs
Speaker: Jiachang Liu
Abstract: This paper investigates the problem of certifying optimality for sparse generalized linear models (GLMs), where sparsity is enforced through an l0 cardinality constraint. While branch-and-bound (BnB) frameworks can certify optimality by pruning nodes using dual bounds, existing methods for computing these bounds are either computationally intensive or exhibit slow convergence, limiting their scalability to large-scale problems. To address this challenge, we propose a first-order proximal gradient algorithm designed to solve the perspective relaxation of the problem within a BnB framework. Specifically, we formulate the relaxed problem as a composite optimization problem and demonstrate that the proximal operator of the non-smooth component can be computed exactly in log-linear time complexity, eliminating the need to solve a computationally expensive second-order cone program. Furthermore, we introduce a simple restart strategy that enhances convergence speed while maintaining low per-iteration complexity. Extensive experiments on synthetic and real-world datasets show that our approach significantly accelerates dual bound computations and is highly effective in providing optimality certificates for large-scale problems. This is joint work with Andrea Lodi and Soroosh Shafiee. A preprint of this work can be found at https://arxiv.org/abs/2502.09502.
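For intuition about how a proximal operator can be evaluated exactly in log-linear time via sorting, here is a minimal sketch. This is not the paper's perspective-relaxation prox; it shows the simpler classic analogue of projecting onto the k-sparse set (hard thresholding) inside a generic proximal-gradient loop, with all names, the step size, and the iteration count assumed for illustration.

```python
import numpy as np

def prox_l0_ball(v, k):
    """Euclidean projection of v onto {x : ||x||_0 <= k}: keep the k
    largest-magnitude entries and zero the rest.  The sort is what gives
    the O(n log n), i.e. log-linear, cost."""
    x = np.zeros_like(v)
    keep = np.argsort(-np.abs(v))[:k]
    x[keep] = v[keep]
    return x

def proximal_gradient(grad_f, x0, k, step=0.01, iters=500):
    """Generic proximal-gradient iteration x <- prox(x - step * grad f(x));
    in the paper's BnB setting the prox would instead be that of the
    perspective relaxation's non-smooth component."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_l0_ball(x - step * grad_f(x), k)
    return x
```

A restart strategy like the one the abstract mentions would typically reset the momentum of an accelerated variant of such a loop; the plain version above omits acceleration for brevity.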

Talk 3: Sparse quadratic optimization via cardinality constraints
Speaker: Ademir Ribeiro
Abstract: We study the sparse regularized quadratic optimization problem, where the sparsity level is controlled by means of cardinality constraints. Because this kind of constraint is neither continuous nor convex, the problem is very difficult to solve directly. Recently, several researchers have addressed problems of this type using a common strategy: a continuous relaxation reformulation of the problem. In this work we propose a different approach, in which the feasible set of the problem is decomposed into simpler sets, all of them meeting the cardinality constraint. Then, to overcome the combinatorial difficulty induced by this decomposition, we make use of an auxiliary, simple problem of maximizing a concave piecewise quadratic function. The solution of this auxiliary problem, obtained by a subgradient method, is then used to find the solution of the original problem (a sketch of such a subgradient scheme appears after the references below). Numerical experiments show that this strategy can be successful for a significant number of problems. Throughout this work we establish nonlinear optimization results that provide a rigorous mathematical analysis of the reformulation of the problem, the proposed method, and its convergence properties.

References:
Beck A, Eldar YC (2013) Sparsity constrained nonlinear optimization: Optimality conditions and algorithms. SIAM J. Optim. 23(3):1480–1509.
Nesterov Y (2004) Introductory Lectures on Convex Optimization: A Basic Course (Kluwer Academic Publishers).
Vreugdenhil R, Nguyen VA, Eftekhari A, Esfahani PM (2021) Principal component hierarchy for sparse quadratic programs. In Meila M, Zhang T, eds., Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10607–10616.
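The auxiliary problem above is the maximization of a concave piecewise quadratic function by a subgradient method. As a minimal sketch, assuming the function is given explicitly as a pointwise minimum of concave quadratics, f(x) = min_i { -x'A_i x / 2 + b_i'x + c_i } with each A_i positive semidefinite (a representation assumed here for illustration, not taken from the talk):

```python
import numpy as np

def supergradient_ascent(As, bs, cs, x0, iters=200):
    """Maximize f(x) = min_i q_i(x) with q_i(x) = -0.5 x'A_i x + b_i'x + c_i,
    each A_i PSD, so f is concave and piecewise quadratic.  The gradient of
    any active (minimizing) piece is a supergradient of f at x; a diminishing
    step size 1/sqrt(t) is the standard choice."""
    x = x0.copy()
    best_x, best_val = x.copy(), -np.inf
    for t in range(1, iters + 1):
        vals = [-0.5 * x @ A @ x + b @ x + c for A, b, c in zip(As, bs, cs)]
        i = int(np.argmin(vals))        # active piece defining f(x)
        if vals[i] > best_val:          # subgradient methods are not monotone,
            best_x, best_val = x.copy(), vals[i]  # so track the best iterate
        g = -As[i] @ x + bs[i]          # supergradient of f at x
        x = x + g / np.sqrt(t)          # diminishing-step ascent update
    return best_x, best_val
```

With a diminishing step size, the best recorded value converges to the maximum of f; this is the standard guarantee for subgradient schemes on concave functions.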

Speakers

Nadav Hallak

Jiachang Liu

Assistant Research Professor, Cornell University
I am an assistant research professor (postdoc) at the Center for Data Science for Enterprise and Society (CDSES) at Cornell University. My hosts are Professor Andrea Lodi and Professor Soroosh Shafiee. Prior to joining Cornell, I completed my Ph.D. in Electrical and Computer Engineering…

Ademir Ribeiro

Taper Hall (THH) 210 3501 Trousdale Pkwy, 210, Los Angeles, CA 90089
