Sunday, July 20
 

6:30pm PDT

ICCOPT Conference Welcome Mixer (Tickets Needed)
Sunday July 20, 2025 6:30pm - 8:30pm PDT
California Science Center
 
Monday, July 21
 

8:45am PDT

Opening Remarks
Monday July 21, 2025 8:45am - 9:00am PDT
USC Bovard Auditorium 3551 Trousdale Pkwy, Los Angeles, CA 90089

5:45pm PDT

Best Paper Session
Monday July 21, 2025 5:45pm - 7:15pm PDT
Finalists (in alphabetical order):

Guy Kornowski for the paper "Oracle Complexity in Nonsmooth Nonconvex Optimization", co-authored with Ohad Shamir.
Abstract: It is well-known that given a smooth, bounded-from-below, and possibly nonconvex function, standard gradient-based methods can find $\epsilon$-stationary points (with gradient norm less than $\epsilon$) in $\mathcal{O}(1/\epsilon^2)$ iterations. However, many important nonconvex optimization problems, such as those associated with training modern neural networks, are inherently not smooth, making these results inapplicable. In this paper, we study nonsmooth nonconvex optimization from an oracle complexity viewpoint, where the algorithm is assumed to be given access only to local information about the function at various points. We provide two main results: First, we consider the problem of getting \emph{near} $\epsilon$-stationary points. This is perhaps the most natural relaxation of \emph{finding} $\epsilon$-stationary points, which is impossible in the nonsmooth nonconvex case. We prove that this relaxed goal cannot be achieved efficiently, for any distance and $\epsilon$ smaller than some constants. Our second result deals with the possibility of tackling nonsmooth nonconvex optimization by reduction to smooth optimization: Namely, applying smooth optimization methods on a smooth approximation of the objective function. For this approach, we prove under a mild assumption an inherent trade-off between oracle complexity and smoothness: On the one hand, smoothing a nonsmooth nonconvex function can be done very efficiently (e.g., by randomized smoothing), but with dimension-dependent factors in the smoothness parameter, which can strongly affect iteration complexity when plugging into standard smooth optimization methods. On the other hand, these dimension factors can be eliminated with suitable smoothing methods, but only by making the oracle complexity of the smoothing process exponentially large.
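For context, a standard randomized-smoothing construction (a classical tool from the literature, not a result of this paper) replaces $f$ by its average over a small ball; for an $L$-Lipschitz $f$ the smoothed function has Lipschitz gradient, but with a dimension-dependent constant, which is exactly the trade-off discussed above:
\begin{equation*}
f_{\delta}(x) := \mathbb{E}_{u \sim \mathcal{U}(\mathbb{B})}\left[ f(x + \delta u) \right], \qquad \left\| \nabla f_{\delta}(x) - \nabla f_{\delta}(y) \right\| \leq \frac{c\, L \sqrt{d}}{\delta}\, \| x - y \|,
\end{equation*}
where $\mathbb{B}$ is the unit Euclidean ball in $\mathbb{R}^{d}$ and $c$ is an absolute constant.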

Naoki Marumo for the paper "Parameter-free Accelerated Gradient Descent for nonconvex optimization", co-authored with Akiko Takeda.
Abstract: We propose a new first-order method for minimizing nonconvex functions with a Lipschitz continuous gradient and Hessian. The proposed method is an accelerated gradient descent with two restart mechanisms and finds a solution where the gradient norm is less than $\epsilon$ in $O(\epsilon^{-7/4})$ function and gradient evaluations. Unlike existing first-order methods with similar complexity bounds, our algorithm is parameter-free because it requires no prior knowledge of problem-dependent parameters, e.g., the Lipschitz constants and the target accuracy $\epsilon$. The main challenge in achieving this advantage is estimating the Lipschitz constant of the Hessian using only first-order information. To this end, we develop a new Hessian-free analysis based on two technical inequalities: a Jensen-type inequality for gradients and an error bound for the trapezoidal rule. Several numerical results illustrate that the proposed method performs comparably to existing algorithms with similar complexity bounds, even without parameter tuning.
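As a point of reference only, the following is a minimal sketch of a classical accelerated gradient method with a function-value restart (in the spirit of O'Donoghue and Candès), not the authors' parameter-free scheme: unlike the paper's method, it needs a user-supplied step size lr. The toy quadratic at the end is purely illustrative.

import numpy as np

def agd_with_restart(f, grad, x0, lr, max_iter=1000, tol=1e-6):
    # Nesterov-style accelerated gradient descent with a simple
    # function-value restart: momentum is reset whenever f increases.
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    t = 1.0
    f_prev = f(x)
    for _ in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) < tol:
            break
        x_next = y - lr * g                               # gradient step at the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum / extrapolation
        f_curr = f(x_next)
        if f_curr > f_prev:                               # restart: discard momentum
            y, t_next = x_next.copy(), 1.0
        x, t, f_prev = x_next, t_next, f_curr
    return x

# Illustrative use on a toy quadratic (lr = 1/L with L = 10 assumed known here):
A = np.diag([1.0, 10.0])
x_min = agd_with_restart(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
                         x0=np.ones(2), lr=0.1)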

Lai Tian for the paper "Testing Approximate Stationarity Concepts for Piecewise Affine Functions", co-authored with Anthony Man-Cho So.
Abstract: We study the basic computational problem of detecting approximate stationary points for continuous piecewise affine (PA) functions. Our contributions span multiple aspects, including complexity, regularity, and algorithms. Specifically, we show that testing first-order approximate stationarity concepts, as defined by commonly used generalized subdifferentials, is computationally intractable unless $\mathsf{P}=\mathsf{NP}$. To facilitate computability, we consider a polynomial-time solvable relaxation by abusing the convex subdifferential sum rule and establish a tight characterization of its exactness. Furthermore, addressing an open issue motivated by the need to terminate the subgradient method in finite time, we introduce the first oracle-polynomial-time algorithm to detect so-called near-approximate stationary points for PA functions. A notable byproduct of our development in regularity is the first necessary and sufficient condition for the validity of an equality-type (Clarke) subdifferential sum rule. Our techniques revolve around two new geometric notions for convex polytopes and may be of independent interest in nonsmooth analysis. Moreover, some corollaries of our work on complexity and algorithms for stationarity testing address open questions in the literature. To demonstrate the versatility of our results, we complement our findings with applications to a series of structured piecewise smooth functions, including $\rho$-margin-loss SVM, piecewise affine regression, and nonsmooth neural networks.
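As a concrete instance of the objects involved (a textbook fact, not a result of the paper): for a convex piecewise affine function written as a finite maximum of affine pieces, the (Clarke) subdifferential at a point is the convex hull of the gradients of the active pieces,
\begin{equation*}
f(x) = \max_{i \in [m]} \left( a_{i}^{\top} x + b_{i} \right), \qquad \partial f(x) = \operatorname{conv}\left\{ a_{i} : a_{i}^{\top} x + b_{i} = f(x) \right\},
\end{equation*}
and $x$ is called $\epsilon$-stationary when $\operatorname{dist}\left( 0, \partial f(x) \right) \leq \epsilon$; the paper studies such tests for general (not necessarily convex) PA functions.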

Speakers

Naoki Marumo


Guy Kornowski

Weizmann Institute of Science
Talk title: Dimension dependence in nonconvex optimization. Bio: Guy Kornowski is a PhD student at the Weizmann Institute of Science, advised by Prof. Ohad Shamir. During his PhD he interned at Apple ML Research, where he worked with Kunal Talwar and Vitaly Feldman. His research focuses...
Monday July 21, 2025 5:45pm - 7:15pm PDT
Taper Hall (THH) 101 3501 Trousdale Pkwy, 101, Los Angeles, CA 90089
 
Tuesday, July 22
 

5:45pm PDT

Poster Session
Tuesday July 22, 2025 5:45pm - 7:15pm PDT

Poster 1: Weighted Data Valuation for Statistical Learning via DC Programming Methods
Presenter: Hassan Nojavan
Abstract: We propose a new formulation of empirical risk minimization that accounts for the weights of data points. We reformulate the problem as difference-of-convex (DC) and bi-convex programs and apply suitable algorithms, including the DC algorithm and Alternate Convex Search (ACS). The proposed methods are applied to regression settings for outlier detection and the top N recommender system problem for data valuation. Our numerical experiments demonstrate that the proposed approaches consistently deliver high-quality solutions, outperforming existing methods used in practice, while effectively identifying data anomalies.
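For orientation, the generic DC algorithm (DCA) step for a decomposition $f = g - h$ with $g, h$ convex linearizes the concave part and solves a convex subproblem; the weighted formulations on the poster are a specific instance of this template, sketched here only in its generic form:
\begin{equation*}
y^{k} \in \partial h(x^{k}), \qquad x^{k+1} \in \operatorname*{arg\,min}_{x} \; g(x) - \langle y^{k}, x \rangle.
\end{equation*}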

Poster 2: On the Infeasibility of Convex QCQPs
Presenter: Matias Villagra
Abstract: This poster presents preliminary research on fundamental questions regarding the computational complexity of infeasible convex quadratically constrained quadratic programs (QCQPs) within the Turing model of computation. Given an infeasible system $\Sigma$ of convex quadratic inequalities $\{ f_{i}(x) \leq 0, \forall i \in [m] \}$, we say that $\Sigma$ is an Irreducible Inconsistent Subsystem (IIS) if removing any single constraint $f_{j}(x) \leq 0$ from $\Sigma$ makes the subsystem feasible. Our goal is to understand whether, given an IIS $\Sigma$, we can exhibit a certificate of size polynomial in the length of $\Sigma$ which proves that $\Sigma$ is infeasible. A natural way to address this question is to understand the complexity of the minimum infeasibility (MINF) of $\Sigma$, which can be defined as
\begin{equation*}
s^* := \inf \left\{ s \in \mathbb{R}_{+} : \exists\, x \ \text{such that} \ f_{i}(x) \leq s,\ \forall i \in [m] \right\}.
\end{equation*}
The so-called fundamental theorem for convex QCQPs (Terlaky 1985, Luo and Zhang 1999) tells us that if a convex QCQP is bounded below, then the infimum is always attained. But is MINF well defined under the Turing model of computation? In other words, is the size of $s^*$ always bounded by a polynomial in the bit length of $\Sigma$? We present partial results for this question, along with alternative methods for certifying the infeasibility of $\Sigma$.
This is joint work with Daniel Bienstock.
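A toy illustration (not taken from the poster) of an IIS and its minimum infeasibility: the affine, hence convex quadratic, system $\{ x \leq 0,\ 1 - x \leq 0 \}$ is infeasible, becomes feasible after removing either constraint, and has
\begin{equation*}
s^{*} = \inf \left\{ s \in \mathbb{R}_{+} : \exists\, x \ \text{such that} \ x \leq s,\ 1 - x \leq s \right\} = \tfrac{1}{2},
\end{equation*}
attained at $x = \tfrac{1}{2}$.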

Poster 3: Enhancing Convergence of Decentralized Gradient Tracking under the KL Property
Presenter: Xiaokai Chen
Abstract: We study decentralized multiagent optimization over networks, modeled as undirected graphs. The optimization problem consists of minimizing a nonconvex smooth function plus a convex extended-value function, which enforces constraints or extra structure on the solution (e.g., sparsity, low rank). We further assume that the objective function satisfies the Kurdyka-Łojasiewicz (KL) property, with given exponent θ∈[0,1). The KL property is satisfied by several (nonconvex) functions of practical interest, e.g., those arising from machine learning applications; in the centralized setting, it permits achieving strong convergence guarantees. Here we establish convergence of the same type for the well-known decentralized gradient-tracking-based algorithm SONATA. Specifically, (i) when θ∈(0,1/2], the sequence generated by SONATA converges to a stationary solution of the problem at an R-linear rate; (ii) when θ∈(1/2,1), a sublinear rate is certified; and finally (iii) when θ=0, the iterates either converge in a finite number of steps or converge at an R-linear rate. This matches the convergence behavior of centralized proximal-gradient algorithms except when θ=0. Numerical results validate our theoretical findings.
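For reference, one common (simplified) form of the KL property with exponent $\theta$ is the following; the poster's precise assumptions may differ. A function $f$ satisfies the KL property at a stationary point $\bar{x}$ with exponent $\theta \in [0, 1)$ if there exist $c, \varepsilon > 0$ such that
\begin{equation*}
\left| f(x) - f(\bar{x}) \right|^{\theta} \leq c \, \operatorname{dist}\left( 0, \partial f(x) \right) \qquad \text{for all } x \text{ with } \| x - \bar{x} \| \leq \varepsilon,
\end{equation*}
with the convention $0^{0} = 0$; smaller exponents $\theta$ correspond to the faster rates listed above.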

Poster 4: The Nonconvex Riemannian Proximal Gradient Method
Presenter: Paula J. John
Abstract: We consider a class of nonconvex optimization problems over a Riemannian manifold, where the objective is a sum of a smooth and a possibly nonsmooth function. Our work introduces a new approach to Riemannian adaptations of the proximal gradient method. The algorithm has a straightforward implementation and does not require any computation in the embedding space or solving of subproblems on the tangent space. This is achieved by first performing a gradient step and then applying a proximal operator directly on the manifold. We present numerical examples showing that this method finds applications in different Riemannian optimization problems. This is joint work with Ronny Bergmann, Hajg Jasa, and Max Pfeffer.
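One natural way to write the two-step scheme described above (a sketch under standard notation; in practice the exponential map may be replaced by a retraction, and the manifold proximal map need not be single-valued):
\begin{equation*}
y_{k} = \exp_{x_{k}}\!\left( - t_{k} \operatorname{grad} f(x_{k}) \right), \qquad x_{k+1} \in \operatorname*{arg\,min}_{x \in \mathcal{M}} \; g(x) + \frac{1}{2 t_{k}} \, d_{\mathcal{M}}(x, y_{k})^{2},
\end{equation*}
where $f$ is the smooth term, $g$ the (possibly nonsmooth) term, and $d_{\mathcal{M}}$ the Riemannian distance.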

Poster 5: A Stochastic Approach to the Subset Selection Problem via Mirror Descent
Presenter: Dan Greenstein
Abstract: The subset selection problem is fundamental in machine learning and other fields of computer science.
We introduce a stochastic formulation for the minimum cost subset selection problem in a black box setting, in which only the subset metric value is available.
Subsequently, we can handle two-stage schemes, with an outer subset-selection component and an inner subset cost evaluation component. We propose formulating the subset selection problem in a stochastic manner by choosing subsets at random from a distribution whose parameters are learned. Two stochastic formulations are proposed.
The first explicitly restricts the subset's cardinality, and the second yields the desired cardinality in expectation.
The distribution is parameterized by a decision variable, which we optimize using Stochastic Mirror Descent.
Our choice of distributions yields constructive closed-form unbiased stochastic gradient formulas and convergence guarantees, including a rate with favorable dependency on the problem parameters.
Empirical evaluation of selecting a subset of layers in transfer learning complements our theoretical findings and demonstrates the potential benefits of our approach.
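The classical score-function (REINFORCE) identity behind unbiased stochastic gradients of this type is shown below; the poster's specific distributions and cardinality constraints are not reproduced here:
\begin{equation*}
\nabla_{\theta} \, \mathbb{E}_{S \sim p_{\theta}} \left[ F(S) \right] = \mathbb{E}_{S \sim p_{\theta}} \left[ F(S) \, \nabla_{\theta} \log p_{\theta}(S) \right],
\end{equation*}
so sampled subsets $S$ together with $\nabla_{\theta} \log p_{\theta}(S)$ yield the stochastic gradients fed to mirror descent.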

Poster 6: Directional Smoothness and Gradient Methods: Convergence and Adaptivity
Presenter: Aaron Mishkin
Abstract: We develop new sub-optimality bounds for gradient descent (GD) that depend on the conditioning of the objective along the path of optimization rather than on global, worst-case constants. Key to our proofs is directional smoothness, a measure of gradient variation that we use to develop upper-bounds on the objective. Minimizing these upper-bounds requires solving implicit equations to obtain a sequence of strongly adapted step-sizes; we show that these equations are straightforward to solve for convex quadratics and lead to new guarantees for two classical step-sizes. For general functions, we prove that the Polyak step-size and normalized GD obtain fast, path-dependent rates despite using no knowledge of the directional smoothness. Experiments on logistic regression show our convergence guarantees are tighter than the classical theory based on L-smoothness.
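For reference, the two classical step-size rules mentioned in the abstract are (with $f^{*}$ the optimal value and $\gamma > 0$ a user-chosen constant):
\begin{equation*}
x_{k+1} = x_{k} - \eta_{k} \nabla f(x_{k}), \qquad \eta_{k}^{\text{Polyak}} = \frac{f(x_{k}) - f^{*}}{\left\| \nabla f(x_{k}) \right\|^{2}}, \qquad \eta_{k}^{\text{NGD}} = \frac{\gamma}{\left\| \nabla f(x_{k}) \right\|}.
\end{equation*}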

Poster 7: The Convex Riemannian Proximal Gradient Method
Presenter: Hajg Jasa
Abstract: We consider a class of (strongly) geodesically convex optimization problems on Hadamard manifolds, where the objective function splits into the sum of a smooth and a possibly nonsmooth function. We introduce an intrinsic convex Riemannian proximal gradient (CRPG) method that employs the manifold proximal map for the nonsmooth step, without operating in the embedding or in a tangent space. We establish a sublinear convergence rate for convex problems and a linear convergence rate for strongly convex problems, and derive fundamental prox-grad inequalities that generalize the Euclidean case. Our numerical experiments on hyperbolic spaces and manifolds of symmetric positive definite matrices demonstrate substantial computational advantages over existing methods. This is joint work with Ronny Bergmann, Paula J. John, and Max Pfeffer.
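For reference, a function $f$ on a Hadamard manifold $\mathcal{M}$ is geodesically ($\mu$-strongly) convex if, along every geodesic $\gamma : [0,1] \to \mathcal{M}$,
\begin{equation*}
f(\gamma(t)) \leq (1 - t)\, f(\gamma(0)) + t\, f(\gamma(1)) - \frac{\mu}{2} \, t (1 - t) \, d_{\mathcal{M}}\!\left( \gamma(0), \gamma(1) \right)^{2}, \qquad t \in [0, 1],
\end{equation*}
with $\mu = 0$ giving plain geodesic convexity; this is the standard notion behind the sublinear and linear rates stated above.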

Poster 8: DCatalyst: A
Speakers
Xiaokai Chen
Paula J. John
Dan Greenstein
Aaron Mishkin
Hajg Jasa
Tianyu Cao
Henry Graven
Zhiyuan Zhang
Jialu Bao
Can Chen
Chengpiao Huang
Smajil Halilovic
Jie Wang
Edward Nguyen
Tuesday July 22, 2025 5:45pm - 7:15pm PDT
Olin Hall of Engineering (OHE) Patio 3650 McClintock Ave, Los Angeles, CA 90089
 
Wednesday, July 23
 

6:30pm PDT

Conference banquet (ADVANCE PURCHASE REQUIRED)
Wednesday July 23, 2025 6:30pm - 9:00pm PDT
Town & Gown USC 665 W Exposition Blvd, Los Angeles, CA 90089
 
Thursday, July 24
 

8:45am PDT

Last Day Remarks
Thursday July 24, 2025 8:45am - 9:00am PDT
USC Bovard Auditorium 3551 Trousdale Pkwy, Los Angeles, CA 90089

5:30pm PDT

End of Conference
Thursday July 24, 2025 5:30pm - 5:30pm PDT
TBA
 