Tuesday, July 22
 

5:45pm PDT

Poster Session
Tuesday July 22, 2025 5:45pm - 7:15pm PDT

Poster 1: Weighted Data Valuation for Statistical Learning via DC Programming Methods
Presenter: Hassan Nojavan
Abstract: We propose a new formulation of empirical risk minimization that accounts for the weights of data points. We reformulate the problem as difference-of-convex (DC) and bi-convex programs and apply suitable algorithms, including the DC algorithm and Alternate Convex Search (ACS). The proposed methods are applied to regression settings for outlier detection and to the top-N recommender system problem for data valuation. Our numerical experiments demonstrate that the proposed approaches consistently deliver high-quality solutions, outperforming existing methods used in practice, while effectively identifying data anomalies.
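As a rough illustration of the alternating idea (not the authors' formulation), the sketch below applies Alternate Convex Search to a toy weighted least-squares problem in which the weights are constrained to select k of the n data points; the constraint set, the cardinality parameter k, and the data are assumptions made purely for illustration.

# Illustrative Alternate Convex Search (ACS) for a weighted least-squares
# instance of weighted ERM: min_{beta, w} sum_i w_i * (y_i - x_i @ beta)**2
# with w in [0,1]^n and sum(w) = k. This toy weight constraint is an
# assumption for illustration, not the formulation from the poster.
import numpy as np

def acs_weighted_ls(X, y, k, iters=20):
    n = X.shape[0]
    w = np.ones(n) * (k / n)          # start from uniform weights
    for _ in range(iters):
        # beta-step: weighted least squares (convex in beta for fixed w)
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        # w-step: the objective is linear in w, so an optimal vertex puts
        # weight 1 on the k smallest squared residuals (flags outliers)
        res = (y - X @ beta) ** 2
        w = np.zeros(n)
        w[np.argsort(res)[:k]] = 1.0
    return beta, w

# toy data with a few gross outliers
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)
y[:5] += 20.0                          # inject outliers
beta, w = acs_weighted_ls(X, y, k=90)
print("estimated beta:", beta, " points down-weighted:", np.where(w == 0)[0])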

Poster 2: On the Infeasibility of Convex QCQPs
Presenter: Matias Villagra
Abstract: This poster presents preliminary research on fundamental questions regarding the computational complexity of infeasible convex quadratically constrained quadratic programs (QCQPs) within the Turing model of computation. Given an infeasible system $\Sigma$ of convex quadratic inequalities $\{ f_{i}(x) \leq 0, \forall i \in [m] \}$, we say that $\Sigma$ is an Irreducible Inconsistent Subsystem (IIS) if, after removing any constraint $f_{j}(x) \leq 0$ from $\Sigma$, the subsystem becomes feasible. Our goal is to understand whether, given an IIS $\Sigma$, we can exhibit a certificate of size polynomial in the length of $\Sigma$ which proves that $\Sigma$ is infeasible. A natural way to address this question is to understand the complexity of the minimum infeasibility (MINF) for $\Sigma$, which can be defined as
\begin{equation*}
s^* := \inf \left\{ s \in \mathbb{R}_{+} : \exists x,\ f_{i}(x) \leq s,\ \forall i \in [m] \right\}.
\end{equation*}
The so-called fundamental theorem for convex QCQPs (Terlaky 1985, Luo and Zhang 1999) tells us that if a convex QCQP is bounded below, then the infimum is always attained. But is MINF well defined under the Turing model of computation? In other words, is the size of $s^*$ always bounded by a polynomial in the bit length of $\Sigma$? We present partial results for this question, along with alternative methods for certifying the infeasibility of $\Sigma$.
This is joint work with Daniel Bienstock.
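For intuition, here is a tiny illustrative example (not taken from the poster) of an IIS consisting of two convex inequalities in one variable, together with its MINF value, reading the infimum as taken over $x$ as well:
\begin{align*}
\Sigma &= \{\, x \leq 0,\ \ 1 - x \leq 0 \,\}, \qquad x \in \mathbb{R},\\
s^* &= \inf \left\{ s \in \mathbb{R}_{+} : \exists x,\ x \leq s,\ 1 - x \leq s \right\} = \tfrac{1}{2}, \quad \text{attained at } x = \tfrac{1}{2}.
\end{align*}
Removing either inequality makes the system feasible, so $\Sigma$ is an IIS, and $s^* = 1/2 > 0$ certifies its infeasibility.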

Poster 3: Enhancing Convergence of Decentralized Gradient Tracking under the KL Property
Presenter: Xiaokai Chen
Abstract: We study decentralized multiagent optimization over networks, modeled as undirected graphs. The optimization problem consists of minimizing a nonconvex smooth function plus a convex extended-value function, which enforces constraints or extra structure on the solution (e.g., sparsity, low rank). We further assume that the objective function satisfies the Kurdyka-Łojasiewicz (KL) property with a given exponent θ ∈ [0,1). The KL property is satisfied by several (nonconvex) functions of practical interest, e.g., arising from machine learning applications; in the centralized setting, it makes it possible to achieve strong convergence guarantees. Here we establish convergence of the same type for the well-known decentralized gradient-tracking-based algorithm SONATA. Specifically, (i) when θ ∈ (0,1/2], the sequence generated by SONATA converges to a stationary solution of the problem at an R-linear rate; (ii) when θ ∈ (1/2,1), a sublinear rate is certified; and finally (iii) when θ = 0, the iterates either converge in a finite number of steps or converge at an R-linear rate. This matches the convergence behavior of centralized proximal-gradient algorithms except when θ = 0. Numerical results validate our theoretical findings.
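The sketch below shows a generic decentralized gradient-tracking iteration (in the spirit of DIGing-type methods) on a smooth least-squares toy problem; it is not SONATA itself, which additionally handles the nonsmooth term and uses surrogate minimization. The ring-graph mixing matrix, step size, and local costs are assumptions for illustration only.

# A minimal, generic gradient-tracking iteration for smooth decentralized
# least squares (a DIGing-style sketch, NOT the SONATA algorithm).
import numpy as np

rng = np.random.default_rng(1)
n_agents, d = 5, 3
A = [rng.normal(size=(10, d)) for _ in range(n_agents)]   # local data
b = [rng.normal(size=10) for _ in range(n_agents)]

def local_grad(i, x):                       # gradient of f_i(x) = 0.5 * ||A_i x - b_i||^2
    return A[i].T @ (A[i] @ x - b[i])

# doubly stochastic mixing matrix for a ring graph
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

alpha = 0.01
X = np.zeros((n_agents, d))                                 # one row per agent
G = np.array([local_grad(i, X[i]) for i in range(n_agents)])
Y = G.copy()                                                # gradient trackers

for _ in range(500):
    X = W @ X - alpha * Y                                   # consensus + descent
    G_new = np.array([local_grad(i, X[i]) for i in range(n_agents)])
    Y = W @ Y + G_new - G                                   # track the average gradient
    G = G_new

print("consensus disagreement:", np.linalg.norm(X - X.mean(0)))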

Poster 4: The Nonconvex Riemannian Proximal Gradient Method
Presenter: Paula J. John
Abstract: We consider a class of nonconvex optimization problems over a Riemannian manifold, where the objective is the sum of a smooth and a possibly nonsmooth function. Our work introduces a new approach to Riemannian adaptations of the proximal gradient method. The algorithm has a straightforward implementation and requires neither computations in the embedding space nor the solution of subproblems on the tangent space. This is achieved by first performing a gradient step and then applying a proximal operator directly on the manifold. We present numerical examples showing how this method applies to different Riemannian optimization problems. This is joint work with Ronny Bergmann, Hajg Jasa, and Max Pfeffer.
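A minimal structural sketch of the "gradient step, then prox directly on the manifold" pattern, instantiated on the unit sphere with an l1 penalty; the soft-threshold-and-renormalize prox used here is only a crude stand-in for the intrinsic manifold proximal map described in the poster, and all problem data are illustrative assumptions.

# Structural sketch of a Riemannian proximal gradient step on the unit sphere:
# (1) Riemannian gradient step via a retraction, (2) a "prox" applied directly
# on the manifold. The prox below is a crude stand-in, not the intrinsic map.
import numpy as np

def riem_grad(x, egrad):
    return egrad - (egrad @ x) * x        # project Euclidean gradient onto the tangent space

def retract(x, v):
    y = x + v                              # retraction on the sphere: step, then renormalize
    return y / np.linalg.norm(y)

def sphere_prox_l1(x, lam):
    y = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # Euclidean soft-threshold
    nrm = np.linalg.norm(y)
    return x if nrm == 0 else y / nrm                    # map back to the sphere

def rpg_sphere(C, lam=0.05, step=0.2, iters=200):
    # minimize f(x) = 0.5 x^T C x + lam * ||x||_1 over the unit sphere
    n = C.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        g = riem_grad(x, C @ x)            # gradient step ...
        x = retract(x, -step * g)
        x = sphere_prox_l1(x, step * lam)  # ... then prox on the manifold
    return x

C = np.diag([3.0, 1.0, 0.5, 0.1])
print(rpg_sphere(C))                        # roughly sparse, aligned with the smallest eigenvalue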

Poster 5: A Stochastic Approach to the Subset Selection Problem via Mirror Descent
Presenter: Dan Greenstein
Abstract: The subset selection problem is fundamental in machine learning and other fields of computer science. We introduce a stochastic formulation for the minimum-cost subset selection problem in a black-box setting, in which only the cost metric value of a subset is available. This allows us to handle two-stage schemes with an outer subset-selection component and an inner subset-cost-evaluation component. We propose formulating the subset selection problem in a stochastic manner by choosing subsets at random from a distribution whose parameters are learned. Two stochastic formulations are proposed: the first explicitly restricts the subset's cardinality, and the second yields the desired cardinality in expectation. The distribution is parameterized by a decision variable, which we optimize using Stochastic Mirror Descent. Our choice of distributions yields constructive closed-form unbiased stochastic gradient formulas and convergence guarantees, including a rate with favorable dependence on the problem parameters. An empirical evaluation of selecting a subset of layers in transfer learning complements our theoretical findings and demonstrates the potential benefits of our approach.
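As a hedged sketch of the stochastic viewpoint, the code below parameterizes subsets by independent Bernoulli inclusion probabilities (an "expected cardinality" style choice), estimates gradients with a score-function estimator, and runs mirror descent with the binary-entropy mirror map, whose update is a gradient step in logit space. The parameterization, cost function, baseline, and clipping are assumptions for illustration and not necessarily the poster's formulation.

# Stochastic mirror descent for black-box subset selection with independent
# Bernoulli inclusion probabilities (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
n = 20
target = rng.random(n) < 0.3                  # hidden "good" subset

def cost(S):                                   # black box: only the subset's cost is observable
    return float(np.sum(S != target))

theta = np.zeros(n)                            # dual (logit) variable; p = sigmoid(theta)
eta, baseline = 0.05, 0.0
for t in range(3000):
    p = np.clip(1.0 / (1.0 + np.exp(-theta)), 0.05, 0.95)   # clip for stability (slight bias)
    S = rng.random(n) < p                      # sample a subset from independent Bernoulli(p)
    c = cost(S)
    # score-function estimate of grad_p E[cost(S)]:
    #   d/dp_i log P(S; p) = S_i / p_i - (1 - S_i) / (1 - p_i)
    g_p = (c - baseline) * (S / p - (~S) / (1.0 - p))
    # binary-entropy mirror map => the mirror-descent update is a gradient
    # step on the dual variable, i.e., in logit space
    theta -= eta * g_p
    baseline = 0.9 * baseline + 0.1 * c        # running baseline to reduce variance

p = 1.0 / (1.0 + np.exp(-theta))
print("learned inclusion probabilities:", np.round(p, 2))
print("hidden target subset:           ", target.astype(int))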

Poster 6: Directional Smoothness and Gradient Methods: Convergence and Adaptivity
Presenter: Aaron Mishkin
Abstract: We develop new sub-optimality bounds for gradient descent (GD) that depend on the conditioning of the objective along the path of optimization rather than on global, worst-case constants. Key to our proofs is directional smoothness, a measure of gradient variation that we use to develop upper bounds on the objective. Minimizing these upper bounds requires solving implicit equations to obtain a sequence of strongly adapted step-sizes; we show that these equations are straightforward to solve for convex quadratics and lead to new guarantees for two classical step-sizes. For general functions, we prove that the Polyak step-size and normalized GD obtain fast, path-dependent rates despite using no knowledge of the directional smoothness. Experiments on logistic regression show that our convergence guarantees are tighter than the classical theory based on L-smoothness.
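For reference, a minimal sketch of gradient descent with the classical Polyak step-size, which requires knowledge of the optimal value f_*; the quadratic objective below is an illustrative assumption.

# Gradient descent with the classical Polyak step-size
#   eta_k = (f(x_k) - f_*) / ||grad f(x_k)||^2,
# shown on a convex quadratic where the optimal value f_* = 0 is known.
import numpy as np

A = np.diag([100.0, 10.0, 1.0, 0.1])          # ill-conditioned quadratic
f = lambda x: 0.5 * x @ A @ x                  # f_* = 0 at x = 0
grad = lambda x: A @ x

x = np.ones(4)
for k in range(200):
    g = grad(x)
    if np.linalg.norm(g) < 1e-12:
        break
    eta = (f(x) - 0.0) / (g @ g)               # Polyak step-size (uses knowledge of f_*)
    x = x - eta * g
print("final suboptimality:", f(x))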

Poster 7: The Convex Riemannian Proximal Gradient Method
Presenter: Hajg Jasa
Abstract: We consider a class of (strongly) geodesically convex optimization problems on Hadamard manifolds, where the objective function splits into the sum of a smooth and a possibly nonsmooth function. We introduce an intrinsic convex Riemannian proximal gradient (CRPG) method that employs the manifold proximal map for the nonsmooth step, without operating in the embedding space or in a tangent space. We establish a sublinear convergence rate for convex problems and a linear convergence rate for strongly convex problems, and derive fundamental prox-grad inequalities that generalize the Euclidean case. Our numerical experiments on hyperbolic spaces and manifolds of symmetric positive definite matrices demonstrate substantial computational advantages over existing methods. This is joint work with Ronny Bergmann, Paula J. John, and Max Pfeffer.

Poster 8: DCatalyst: A
Speakers
Xiaokai Chen
Paula J. John
Dan Greenstein
Aaron Mishkin
Hajg Jasa
Tianyu Cao
Henry Graven
Zhiyuan Zhang
Jialu Bao
Can Chen
Chengpiao Huang
Smajil Halilovic
Jie Wang
Edward Nguyen
Olin Hall of Engineering (OHE) Patio, 3650 McClintock Ave, Los Angeles, CA 90089
 