Sunday, July 20
 

6:30pm PDT

ICCOPT Conference Welcome Mixer
Conference ID badge required for entry
Sunday July 20, 2025 6:30pm - 8:30pm PDT
California Science Center
 
Monday, July 21
 

8:45am PDT

Opening Remarks
Monday July 21, 2025 8:45am - 9:00am PDT
USC Bovard Auditorium 3551 Trousdale Pkwy, Los Angeles, CA 90089

5:45pm PDT

Best Paper Session
Finalists (in alphabetical order):

Guy Kornowski for the paper "Oracle Complexity in Nonsmooth Nonconvex Optimization", co-authored with Ohad Shamir.
Abstract: It is well-known that given a smooth, bounded-from-below, and possibly nonconvex function, standard gradient-based methods can find $\epsilon$-stationary points (with gradient norm less than $\epsilon$) in $\mathcal{O}(1/\epsilon^2)$ iterations. However, many important nonconvex optimization problems, such as those associated with training modern neural networks, are inherently not smooth, making these results inapplicable. In this paper, we study nonsmooth nonconvex optimization from an oracle complexity viewpoint, where the algorithm is assumed to be given access only to local information about the function at various points. We provide two main results: First, we consider the problem of getting \emph{near} $\epsilon$-stationary points. This is perhaps the most natural relaxation of \emph{finding} $\epsilon$-stationary points, which is impossible in the nonsmooth nonconvex case. We prove that this relaxed goal cannot be achieved efficiently, for any distance and $\epsilon$ smaller than some constants. Our second result deals with the possibility of tackling nonsmooth nonconvex optimization by reduction to smooth optimization: Namely, applying smooth optimization methods on a smooth approximation of the objective function. For this approach, we prove under a mild assumption an inherent trade-off between oracle complexity and smoothness: On the one hand, smoothing a nonsmooth nonconvex function can be done very efficiently (e.g., by randomized smoothing), but with dimension-dependent factors in the smoothness parameter, which can strongly affect iteration complexity when plugging into standard smooth optimization methods. On the other hand, these dimension factors can be eliminated with suitable smoothing methods, but only by making the oracle complexity of the smoothing process exponentially large.
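The dimension-dependent smoothing discussed in the abstract can be made concrete with randomized smoothing, $f_\delta(x) = \mathbb{E}_{u \sim \mathrm{Unif}(\mathbb{B})}[f(x + \delta u)]$, whose gradient can be estimated by averaging subgradients of $f$ at randomly perturbed points. The Python sketch below is only an editorial illustration of this standard construction, not the authors' method; the function name smoothed_grad and the $\ell_1$ example are assumptions.

import numpy as np

def smoothed_grad(f_subgrad, x, delta=0.1, n_samples=100, rng=None):
    """Monte Carlo estimate of grad f_delta(x), where
    f_delta(x) = E_{u ~ Unif(unit ball)} [ f(x + delta * u) ]."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    grads = []
    for _ in range(n_samples):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                  # uniform direction on the unit sphere
        u *= rng.uniform() ** (1.0 / d)         # radius making the point uniform in the ball
        grads.append(f_subgrad(x + delta * u))  # any subgradient of f at the perturbed point
    return np.mean(grads, axis=0)

# Example: f(x) = ||x||_1, with subgradient sign(x).
x = np.array([1.0, -2.0, 0.5])
g = smoothed_grad(np.sign, x, delta=0.1)

For a Lipschitz $f$, the surrogate $f_\delta$ is smooth with a constant scaling roughly like $\sqrt{d}/\delta$; this is the dimension factor in the smoothness parameter that, per the abstract, can only be removed at the price of exponentially large oracle complexity in the smoothing process.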

Naoki Marumo for the paper "Parameter-Free Accelerated Gradient Descent for Nonconvex Minimization", co-authored with Akiko Takeda.
Abstract: We propose a new first-order method for minimizing nonconvex functions with a Lipschitz continuous gradient and Hessian. The proposed method is an accelerated gradient descent with two restart mechanisms and finds a solution where the gradient norm is less than $\epsilon$ in $O(\epsilon^{-7/4})$ function and gradient evaluations. Unlike existing first-order methods with similar complexity bounds, our algorithm is parameter-free because it requires no prior knowledge of problem-dependent parameters, e.g., the Lipschitz constants and the target accuracy $\epsilon$. The main challenge in achieving this advantage is estimating the Lipschitz constant of the Hessian using only first-order information. To this end, we develop a new Hessian-free analysis based on two technical inequalities: a Jensen-type inequality for gradients and an error bound for the trapezoidal rule. Several numerical results illustrate that the proposed method performs comparably to existing algorithms with similar complexity bounds, even without parameter tuning.
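The parameter-free aspect rests on estimating unknown problem constants on the fly. The Python sketch below illustrates only the generic guess-and-double pattern (plain gradient descent with an adaptively doubled smoothness estimate); it is an editorial illustration, not the authors' accelerated algorithm with its two restart mechanisms and Hessian-free analysis, and the name adaptive_gd and the quartic test function are assumptions.

import numpy as np

def adaptive_gd(f, grad, x0, eps=1e-4, L0=1.0, max_iter=10000):
    """Gradient descent that estimates a smoothness constant L on the fly."""
    x, L = x0.copy(), L0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            break
        x_trial = x - g / L
        # Sufficient-decrease test implied by L-smoothness: accept the step or double L.
        if f(x) - f(x_trial) >= np.dot(g, g) / (2 * L):
            x = x_trial
        else:
            L *= 2.0  # the estimate was too small; retry the step with a larger L
    return x

# Example: a nonconvex quartic with stationary points at x1 = 0, +-1.
f = lambda x: (x[0] ** 2 - 1) ** 2 + x[1] ** 2
grad = lambda x: np.array([4 * x[0] * (x[0] ** 2 - 1), 2 * x[1]])
x_out = adaptive_gd(f, grad, np.array([2.0, 2.0]))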

Lai Tian for the paper "Testing Approximate Stationarity Concepts for Piecewise Affine Functions", co-authored with Anthony Man-Cho So.
Abstract: We study the basic computational problem of detecting approximate stationary points for continuous piecewise affine (PA) functions. Our contributions span multiple aspects, including complexity, regularity, and algorithms. Specifically, we show that testing first-order approximate stationarity concepts, as defined by commonly used generalized subdifferentials, is computationally intractable unless $\mathcal{P}=\mathcal{NP}$. To facilitate computability, we consider a polynomial-time solvable relaxation by abusing the convex subdifferential sum rule and establish a tight characterization of its exactness. Furthermore, addressing an open issue motivated by the need to terminate the subgradient method in finite time, we introduce the first oracle-polynomial-time algorithm to detect so-called near-approximate stationary points for PA functions. A notable byproduct of our development in regularity is the first necessary and sufficient condition for the validity of an equality-type (Clarke) subdifferential sum rule. Our techniques revolve around two new geometric notions for convex polytopes and may be of independent interest in nonsmooth analysis. Moreover, some corollaries of our work on complexity and algorithms for stationarity testing address open questions in the literature. To demonstrate the versatility of our results, we complement our findings with applications to a series of structured piecewise smooth functions, including $\rho$-margin-loss SVM, piecewise affine regression, and nonsmooth neural networks.
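For a point at which the function is written as a difference of max-affine pieces, $f(x) = \max_i (\langle a_i, x\rangle + b_i) - \max_j (\langle c_j, x\rangle + d_j)$, the sum-rule relaxation amounts to asking whether $0$ lies in the difference of the convex hulls of the active slopes, which is a small linear feasibility problem. The Python sketch below is an editorial illustration of that membership test under this DC/max-affine assumption, not the paper's algorithm; the name sum_rule_stationarity_test is an assumption.

import numpy as np
from scipy.optimize import linprog

def sum_rule_stationarity_test(A_active, C_active):
    """Check whether conv(rows of A_active) and conv(rows of C_active) intersect,
    i.e. whether 0 is in conv{a_i} - conv{c_j}."""
    m, d = A_active.shape
    n, _ = C_active.shape
    # Variables: simplex weights (lam, mu); constraints: A^T lam = C^T mu, sum lam = sum mu = 1.
    A_eq = np.zeros((d + 2, m + n))
    A_eq[:d, :m] = A_active.T
    A_eq[:d, m:] = -C_active.T
    A_eq[d, :m] = 1.0
    A_eq[d + 1, m:] = 1.0
    b_eq = np.concatenate([np.zeros(d), [1.0, 1.0]])
    res = linprog(np.zeros(m + n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.status == 0  # feasible => the relaxed stationarity test passes

# Example: active slopes of the two max-affine parts at some point x.
A_act = np.array([[1.0, 0.0], [0.0, 1.0]])
C_act = np.array([[0.5, 0.5]])
print(sum_rule_stationarity_test(A_act, C_act))  # True: the hulls intersect

As the abstract notes, this relaxed test is polynomial-time but can be inexact; the paper gives a tight characterization of when it coincides with the intractable subdifferential-based criteria.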

Speakers

Naoki Marumo

Project Research Associate, University of Tokyo
Talks: "Parameter-Free Accelerated Gradient Descent for Nonconvex Minimization" in Best Paper Session, Monday July 21, 2025 5:45pm - 7:15pm; "Heavy-ball ODE converges at rate $O(T^{-4/7})$ with averaging for nonconvex functions" in Parallel Sessions 8N: Recent advances in first-order methods, Wednesday…

Guy Kornowski

Weizmann Institute of Science
Talk title: Dimension dependence in nonconvex optimization
Bio: Guy Kornowski is a PhD student at the Weizmann Institute of Science, advised by Prof. Ohad Shamir. During his PhD he interned at Apple ML Research, where he worked with Kunal Talwar and Vitaly Feldman. His research focuses…
Monday July 21, 2025 5:45pm - 7:15pm PDT
Taper Hall (THH) 101, 3501 Trousdale Pkwy, Los Angeles, CA 90089
 
Tuesday, July 22
 

5:45pm PDT

Poster Session
Please see attached files for abstracts.

Poster 1: Weighted Data Valuation for Statistical Learning via DC Programming Methods
Presenter: Hassan Nojavan

Poster 2: On the Infeasibility of Convex QCQPs
Presenter: Matias Villagra

Poster 3: Enhancing Convergence of Decentralized Gradient Tracking under the KL Property
Presenter: Xiaokai Chen

Poster 4: The Nonconvex Riemannian Proximal Gradient Method
Presenter: Paula J. John

Poster 5: A Stochastic Approach to the Subset Selection Problem via Mirror Descent
Presenter: Dan Greenstein

Poster 6: Directional Smoothness and Gradient Methods: Convergence and Adaptivity
Presenter: Aaron Mishkin

Poster 7: The Convex Riemannian Proximal Gradient Method
Presenter: Hajg Jasa

Poster 8: DCatalyst: A Unified Accelerated Framework for Decentralized Optimization
Presenter: Tianyu Cao

Poster 9: Limitations of Iterative Refinement for Enhancing Precision in Linear Optimization
Presenter: Henry Graven

Poster 10: Comparative Study of Sampling-based Multistage Stochastic Linear Programming Algorithms
Presenter: Zhiyuan Zhang

Poster 11: Hamiltonian Simulation of the Quantum Central Path Algorithm
Presenter: Jialu Bao

Poster 12: Interior Point Methods for Sequential Hypothesis Testing via Online Optimization
Presenter: Can Chen

Poster 13: A Stability Principle for Learning under Non-Stationarity
Presenter: Chengpiao Huang

Poster 14: Physics-informed deep learning for certain classes of non-convex and mixed-integer PDE-constrained optimization problems
Presenter: Smajil Halilovic

Poster 15: A Mean-Field Theory for Stochastic Optimization with Decisions Truncated by Random Variables
Presenter: Jie Wang

Poster 16: Tight Ergodic Convergence Rates for Splitting Methods Applied to Saddle Point Problems
Presenter: Edward Nguyen


Speakers

Jie Wang

I am an associate professor at AMSS-CAS in Beijing, China, since July 2021. I was previously a postdoctoral researcher working with Prof. Victor Magron and Prof. Jean-Bernard Lasserre at LAAS-CNRS in Toulouse, France, from July 2019 to June 2021. My research interests include polynomial…

Xiaokai Chen


Paula John

Master's Student, University of Göttingen

Dan Greenstein

PhD Student, Technion - Israel Institute of Technology
I am a PhD student at the Technion, specializing in nonconvex optimization, often involving stochastic elements in the problem formulation, solution methods, or both. My broader research interests include graph algorithms and discrete optimization.

Aaron Mishkin

PhD Student, Stanford University
I am a fifth-year PhD student in the Department of Computer Science at Stanford University. I am fortunate to be supervised by Mert Pilanci. My research interests are in optimization for machine learning. Previously, I completed a master's in computer science at the University o…

Hajg Jasa

Ph.D. Candidate, Norwegian University of Science and Technology (NTNU)
Name: Hajg Jasa
Title: Ph.D. Candidate in Nonsmooth Riemannian Optimization
Affiliation: Department of Mathematical Sciences (IMF) at NTNU

Tianyu Cao


Henry Graven


Zhiyuan Zhang


Jialu Bao


Can Chen

PhD Student, UC San Diego
Hi! I’m Can Chen, a first-year PhD student in Data Science at the Halıcıoğlu Data Science Institute, University of California San Diego. I’m fortunate to be advised by Prof. Jun-Kun Wang. My research interest lies in online learning and online optimization. Recently, I have been working on interior point methods for sequential hypothesis testing via online optimization.

Chengpiao Huang


Smajil Halilovic

Postdoc, Research Group Leader, Technical University of Munich
My main research interest lies in developing efficient approaches to solve complex optimization problems in the field of energy resources and systems. This particularly includes PDE-constrained optimization problems where physical phenomena are governed by partial differential equations…

Edward Nguyen

Tuesday July 22, 2025 5:45pm - 7:15pm PDT
Olin Hall of Engineering (OHE) Patio 3650 McClintock Ave, Los Angeles, CA 90089
 
Wednesday, July 23
 

6:30pm PDT

Conference banquet (ADVANCE PURCHASE REQUIRED)
Wednesday July 23, 2025 6:30pm - 8:30pm PDT
Town & Gown USC 665 W Exposition Blvd, Los Angeles, CA 90089
 
Thursday, July 24
 

8:45am PDT

Last Day Remarks
Thursday July 24, 2025 8:45am - 9:00am PDT
USC Bovard Auditorium 3551 Trousdale Pkwy, Los Angeles, CA 90089

5:30pm PDT

End of Conference
Thursday July 24, 2025 5:30pm - 5:30pm PDT
TBA
 