Venue: Taper Hall (THH) 118
Tuesday, July 22
 

10:30am PDT

Parallel Sessions 4K: Convex approaches to SDP
Session: Convex approaches to SDP
Chair: Richard Zhang
Cluster: Conic and Semidefinite Optimization

Talk 1: Semidefinite Relaxations for the Gromov-Wasserstein Problem
Speaker: Yong Sheng Soh
Abstract: The Gromov-Wasserstein (GW) problem is an extension of the classical optimal transport problem to settings where the source and target distributions reside in incomparable spaces, and for which a cost function that attributes the price of moving resources is not available. The GW problem is a non-convex quadratic program and is intractable to solve in general. The sum-of-squares (SOS) hierarchy is a principled approach for deriving tractable semidefinite relaxations to generic polynomial optimization problems. In this talk, we describe a moment-SOS hierarchy for solving the GW problem. We discuss convergence rates as well as sample complexity rates that arise from sampling. We show that these hierarchies define pseudo-metrics over the space of (metric measure-) spaces. Last, we describe extensions of these relaxations to continuous measures. In particular, our work hints at applying semidefinite relaxations for general optimization problems over distributions that depend on the decision variable in a polynomial way.
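To make the first level of such a hierarchy concrete, here is a minimal sketch of a Shor-type (level-1 moment) relaxation of the GW quadratic program in Python with cvxpy, on a hypothetical random instance; the moment-SOS hierarchy discussed in the talk strengthens this with higher-order moment constraints, so this illustrates the idea rather than the authors' construction.

```python
import numpy as np
import cvxpy as cp

# Toy instance: two random 3-point metric spaces with uniform marginals.
n, m = 3, 3
rng = np.random.default_rng(0)
P1, P2 = rng.normal(size=(n, 2)), rng.normal(size=(m, 2))
D1 = np.linalg.norm(P1[:, None] - P1[None, :], axis=-1)
D2 = np.linalg.norm(P2[:, None] - P2[None, :], axis=-1)
mu, nu = np.ones(n) / n, np.ones(m) / m

# GW is a QP in p = vec(T): cost Q[(i,j),(k,l)] = (D1[i,k] - D2[j,l])^2.
Q = ((D1[:, None, :, None] - D2[None, :, None, :]) ** 2).reshape(n * m, n * m)

# Level-1 relaxation: replace p p^T by M with [[1, p^T], [p, M]] PSD.
N = n * m
G = cp.Variable((N + 1, N + 1), PSD=True)
p, M = G[0, 1:], G[1:, 1:]
T = cp.reshape(p, (n, m), order="C")         # the relaxed coupling matrix
prob = cp.Problem(cp.Minimize(cp.trace(Q @ M)),
                  [G[0, 0] == 1, p >= 0,
                   cp.sum(T, axis=1) == mu,   # row marginals
                   cp.sum(T, axis=0) == nu])  # column marginals
prob.solve(solver=cp.SCS)
print("level-1 SDP lower bound on GW:", prob.value)
```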

Talk 2: Iterative methods for primal-dual scalings in conic optimization
Speaker: Lieven Vandenberghe
Abstract: A central element in the design of primal-dual interior-point methods for conic optimization is the definition of a suitable primal-dual scaling or metric. The talk will discuss simple iterative methods for computing primal-dual scalings. We will consider the Nesterov-Todd scaling for symmetric cones and extensions to sparse positive semidefinite matrix cones.
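For reference, the Nesterov-Todd scaling point for the positive semidefinite cone has a closed form, W = X^{1/2}(X^{1/2} S X^{1/2})^{-1/2} X^{1/2}, the unique positive definite W with W S W = X; the iterative methods in the talk aim to compute such scalings (and their extensions to sparse PSD matrix cones) without forming these matrix square roots explicitly. A small numpy sketch of the closed form, for orientation only:

```python
import numpy as np

def sqrtm_psd(A):
    """Symmetric square root of a PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def nt_scaling(X, S):
    """NT scaling point W for the PSD cone: the unique W > 0 with W S W = X."""
    Xh = sqrtm_psd(X)
    R = sqrtm_psd(Xh @ S @ Xh)            # (X^{1/2} S X^{1/2})^{1/2}
    return Xh @ np.linalg.solve(R, Xh)    # X^{1/2} R^{-1} X^{1/2}

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)); X = A @ A.T + np.eye(4)   # primal iterate
B = rng.normal(size=(4, 4)); S = B @ B.T + np.eye(4)   # dual iterate
W = nt_scaling(X, S)
print("W S W == X:", np.allclose(W @ S @ W, X))
```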

Talk 3: Generalized Cuts and Grothendieck Covers: a Primal-Dual Approximation Framework Extending the Goemans-Williamson Algorithm
Speaker: Nathan Benedetto Proenca
Abstract: We provide a primal-dual framework for randomized approximation algorithms utilizing semidefinite programming (SDP) relaxations. Our framework pairs a continuum of APX-complete problems including MaxCut, Max2Sat, MaxDicut, and more generally, Max-Boolean Constraint Satisfaction and MaxQ (maximization of a positive semidefinite quadratic form over the hypercube) with new APX-complete problems which are stated as convex optimization problems with exponentially many variables. These new dual counterparts, based on what we call Grothendieck covers, range from fractional cut covering problems (for MaxCut) to tensor sign covering problems (for MaxQ). For each of these problem pairs, our framework transforms the randomized approximation algorithms with the best known approximation factors for the primal problems into randomized approximation algorithms for their dual counterparts with reciprocal approximation factors which are tight with respect to the Unique Games Conjecture. For each APX-complete pair, our algorithms solve a single SDP relaxation and generate feasible solutions for both problems which also provide approximate optimality certificates for each other. Our work utilizes techniques from randomized approximation algorithms, convex optimization, and spectral sparsification, as well as Chernoff-type concentration results for random matrices.
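The primal half of the MaxCut instance of this framework is the classical Goemans-Williamson algorithm: solve the SDP relaxation, then round by a random hyperplane. A compact sketch on a hypothetical random graph (the dual fractional cut-covering side and the mutual optimality certificates from the talk are not reproduced here):

```python
import numpy as np
import cvxpy as cp

# Goemans-Williamson MaxCut: SDP relaxation + random-hyperplane rounding.
rng = np.random.default_rng(0)
n = 8
W = rng.integers(0, 2, size=(n, n)); W = np.triu(W, 1); W = W + W.T  # graph

X = cp.Variable((n, n), PSD=True)
obj = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)  # (1/4) sum w_ij (1 - v_i.v_j)
prob = cp.Problem(obj, [cp.diag(X) == 1])
prob.solve(solver=cp.SCS)

# Rounding: factor X = U U^T, then take signs of a random projection.
w, V = np.linalg.eigh(X.value)
U = V * np.sqrt(np.clip(w, 0.0, None))
signs = np.sign(U @ rng.normal(size=n))
cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n)
          if signs[i] != signs[j])
print("SDP upper bound:", prob.value, "rounded cut value:", cut)
```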

Speakers
Yong Sheng Soh
Richard Zhang
Nathan Benedetto Proenca
Tuesday July 22, 2025 10:30am - 11:45am PDT
Taper Hall (THH) 118, 3501 Trousdale Pkwy, Los Angeles, CA 90089

1:15pm PDT

Parallel Sessions 5K: First-order methods for nonsmooth and constrained optimization problems
Session: First-order methods for nonsmooth and constrained optimization problems
Chair: Digvijay Boob
Cluster: Nonlinear Optimization

Talk 1: Linearly Convergent Algorithms for Nonsmooth Problems with Unknown Smooth Pieces
Speaker: Zhe Zhang
Abstract: Non-smoothness is a major bottleneck to efficient optimization. In the absence of smoothness, the theoretical convergence rates drop from linear to sublinear for convex programs, and become orders of magnitude worse for nonconvex programs. Moreover, this huge gap is known to be unimprovable in general. We focus on mild, structured non-smooth problems: piecewise smooth (PWS) functions, whose domain can be partitioned into subsets such that, restricted to each subset, the function is smooth. PWS functions cover ReLU functions, L1-penalties, and min-max saddle point problems as special cases. In particular, we study convex PWS functions, for which we present globally linearly convergent methods under the quadratic growth condition; as a corollary, we improve the iteration complexity for solving weakly convex PWS problems by orders of magnitude. Importantly, our method does not require any knowledge about the individual smooth pieces, and is thus applicable even to general non-smooth programs exhibiting local PWS structure.
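For contrast with the linear rates claimed in the talk, here is a hypothetical PWS instance, f(x) = max_i 0.5 (a_i^T x - b_i)^2, handled by a generic subgradient method with its usual sublinear behavior; like the talk's setting, it only queries the gradient of whichever smooth piece is active, with no knowledge of the partition. This baseline is not the authors' method.

```python
import numpy as np

# Hypothetical PWS objective: a max of smooth convex quadratic pieces.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(5, 3)), rng.normal(size=5)

def f_and_subgrad(x):
    vals = 0.5 * (A @ x - b) ** 2
    i = int(np.argmax(vals))                  # active smooth piece
    return vals[i], (A[i] @ x - b[i]) * A[i]  # its gradient is a subgradient

# Generic normalized subgradient method: O(1/sqrt(k)) steps, sublinear rate.
x = np.zeros(3)
for k in range(1, 2001):
    fx, g = f_and_subgrad(x)
    x = x - g / (np.linalg.norm(g) * np.sqrt(k) + 1e-12)
print("final value:", f_and_subgrad(x)[0])
```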

Talk 2: Primal-Dual Algorithm with Last-Iterate Convergence Guarantees for Stochastic Convex Optimization Problems
Speaker: Mohammad Khalafi
Abstract: We provide the first method that obtains the best-known convergence rate guarantees on the last iterate for stochastic composite nonsmooth convex function-constrained optimization problems. Our novel and easy-to-implement algorithm is based on the augmented Lagrangian technique and uses a new linearized approximation of constraint functions, leading to its name, the Augmented Constraint Extrapolation (Aug-ConEx) method. We show that Aug-ConEx achieves the optimal convergence rate in the nonsmooth stochastic setting without any strong convexity assumption, and a faster rate for the same problem with a strongly convex objective function. While optimal for nonsmooth and stochastic problems, the Aug-ConEx method also accelerates convergence in terms of Lipschitz smoothness constants in both of the aforementioned cases. To the best of our knowledge, this is the first method to obtain such differentiated convergence rate guarantees on the last iterate for a composite nonsmooth stochastic setting without additional factors. We validate the efficiency of our algorithm by comparing it with a state-of-the-art algorithm through numerical experiments.
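As background, the textbook augmented Lagrangian scheme the method builds on looks as follows on a toy projection problem, min ||x - c||^2 subject to a^T x - b <= 0 (a hypothetical instance); Aug-ConEx replaces the inner minimization with extrapolated linearized steps to obtain last-iterate rates, which this sketch does not attempt.

```python
import numpy as np

# Classical augmented Lagrangian loop for min f(x) s.t. g(x) <= 0.
c, a, b, rho = np.array([2.0, 1.0]), np.array([1.0, 1.0]), 1.0, 5.0
x, lam = np.zeros(2), 0.0
for _ in range(200):
    for _ in range(50):  # approximately minimize L_rho(., lam) in x
        g = a @ x - b
        grad = 2 * (x - c) + max(0.0, lam + rho * g) * a
        x -= 0.05 * grad
    lam = max(0.0, lam + rho * (a @ x - b))  # multiplier ascent step
print("x:", x, "constraint value a.x - b:", a @ x - b)  # -> x = (1, 0)
```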

Talk 3: An Optimal Method for Minimizing Heterogeneously Smooth and Convex Compositions
Speaker: Aaron Zoll
Abstract: This talk will discuss a universal, optimal algorithm for convex minimization problems of the composite form f(x) + h(g_1(x), ..., g_m(x)). We allow each component function g_i(x) to independently range from being nonsmooth Lipschitz to smooth, and from convex to strongly convex, described by notions of Hölder continuous gradients and uniform convexity. We note that although the objective is built from a heterogeneous combination of components, it does not necessarily possess any smoothness, Lipschitzness, or other favorable structural properties. Regardless, our proposed sliding accelerated gradient method converges at least as fast as the optimal rate guarantees in terms of oracle access to (sub)gradients of each g_i separately. Furthermore, given access to an estimate of the initial distance to an optimal solution, we provide a “mostly parameter-free” variant. As a key application, fixing h as a nonpositive indicator function, this model readily captures functionally constrained minimization of f(x) subject to g_i(x) \leq 0. Our algorithm and analysis are directly inspired by the Q-analysis technique developed for such smooth constrained minimization by Zhang and Lan. Our theory recovers their accelerated guarantees and extends them to benefit from heterogeneously smooth and convex constraints.
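With h fixed as the indicator of the nonpositive orthant, the model specializes to functionally constrained minimization. Below is a hypothetical instance mixing a smooth objective, a nonsmooth Lipschitz constraint, and a strongly convex constraint, solved directly with cvxpy purely as a reference point, not by the talk's sliding method:

```python
import numpy as np
import cvxpy as cp

# Composite model f(x) + h(g_1(x), g_2(x)) with h the nonpositive indicator,
# i.e. minimize f subject to g_1 <= 0 and g_2 <= 0.
x = cp.Variable(3)
f = cp.sum_squares(x - np.array([1.0, 2.0, 3.0]))  # smooth, convex
g1 = cp.norm1(x) - 2.0                             # nonsmooth Lipschitz
g2 = cp.sum_squares(x) - 3.0                       # strongly convex
prob = cp.Problem(cp.Minimize(f), [g1 <= 0, g2 <= 0])
prob.solve(solver=cp.SCS)
print("optimal value:", prob.value, "x*:", x.value)
```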

Speakers
Mohammad Khalafi
Digvijay Boob
Aaron Zoll
I am a PhD student at The Johns Hopkins University in the Department of Applied Mathematics and Statistics. My work primarily focuses on optimization, under the supervision of Benjamin Grimmer. In particular, I work on expanding classical theory to apply universally to Lipschitz functions, functions with Lipschitz gradients, and everywhere in between (characterized by Hölder continuous gradients). In a dual notion, my work also focuses on functions that are convex, strongly conv…
Tuesday July 22, 2025 1:15pm - 2:30pm PDT
Taper Hall (THH) 118, 3501 Trousdale Pkwy, Los Angeles, CA 90089

4:15pm PDT

Parallel Sessions 6K: Applications to Signal Processing
Session: Applications to Signal Processing
Chair: Robert Bassett
Cluster: Optimization Applications (Communication, Energy, Health, ML, ...)

Talk 1: Stochastic Optimal Search with Signals
Speaker: Jefferson Huang
Abstract: We consider the problem of optimally searching for a target located at one of the nodes of a network. On each time step, the searcher receives a (possibly unreliable) signal indicating where the target is on the network. Starting with the case of searching on a line, we explore the structure of search policies that optimally make use of the signals. 
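A minimal model of the setup, under illustrative assumptions (a signal that is truthful with probability q and uniform otherwise, and a greedy search-the-mode policy rather than the optimal policies characterized in the talk):

```python
import numpy as np

# Bayesian search on n nodes with unreliable location signals.
rng = np.random.default_rng(0)
n, q = 10, 0.8                      # q = probability the signal is truthful
target = rng.integers(n)
belief = np.full(n, 1.0 / n)
for step in range(1, 100):
    signal = target if rng.random() < q else rng.integers(n)
    like = np.full(n, (1 - q) / n)  # likelihood if the signal is noise
    like[signal] += q               # boosted where the signal points
    belief = belief * like
    belief /= belief.sum()
    guess = int(np.argmax(belief))  # greedy: search the modal node
    if guess == target:
        print(f"found target at node {target} after {step} searches")
        break
    belief[guess] = 0.0             # searched and empty: rule it out
    belief /= belief.sum()
```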

Talk 2: Sparse signal reconstruction for over-dispersed low-photon count imaging
Speaker: Roummel Marcia
Abstract: Low-photon count imaging is typically modeled by Poisson statistics. This discrete probability distribution model assumes that the mean and variance of a signal are equal. In the presence of greater variability in a dataset than what is expected, the negative binomial distribution is a suitable over-dispersed alternative to the Poisson distribution. In this work, we present an optimization framework for reconstructing sparse signals in these over-dispersed low-count settings.
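A sketch of what such a framework can look like, under illustrative assumptions (a nonnegative linear model mu = Ax, known dispersion r, and an l1 penalty handled by proximal gradient); the authors' exact formulation may differ:

```python
import numpy as np

# Sparse reconstruction under a negative binomial (NB) likelihood:
# minimize NB negative log-likelihood + tau*||x||_1 over x >= 0.
rng = np.random.default_rng(0)
m, n, r, tau = 40, 20, 5.0, 0.5
A = rng.uniform(0.0, 1.0, size=(m, n))
x_true = np.zeros(n); x_true[[2, 7, 11]] = [3.0, 1.5, 2.0]
mu_true = A @ x_true
y = rng.negative_binomial(r, r / (r + mu_true))  # NB counts with mean mu_true

def grad_nll(x):
    mu = A @ x + 1e-6
    # d/dmu of the NB NLL is -y/mu + (y + r)/(mu + r); chain rule through A.
    return A.T @ (-(y / mu) + (y + r) / (mu + r))

x, step = np.ones(n), 1e-3
for _ in range(5000):
    z = x - step * grad_nll(x)
    x = np.maximum(z - step * tau, 0.0)  # prox of tau*||.||_1 on x >= 0
print("true support: [2 7 11], recovered:", np.nonzero(x > 0.1)[0])
```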

Talk 3: Decentralized Sensor Network Localization via Customized Proximal Splitting
Speaker: Peter Barkley
Abstract: We apply recent advances in the design of custom proximal splitting algorithms to build a decentralized algorithm for solving the node-based SDP relaxation of the sensor network localization problem using noisy distance data. We implement the algorithm using MPI. We then explore the performance of various algorithm design options, and compare the decentralized algorithm with other decentralized approaches, including a graph-based decentralized ADMM.
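For orientation, the centralized node-based SDP relaxation being decentralized can be prototyped directly: with Z = [[I, X], [X^T, Y]] PSD, squared distances become linear in X and Y. A small cvxpy sketch with hypothetical anchors and noise levels (the talk's contribution, the decentralized proximal-splitting solver, is not reproduced):

```python
import numpy as np
import cvxpy as cp

# Node-based SDP relaxation of sensor network localization from noisy ranges.
rng = np.random.default_rng(0)
s, anchors = 4, np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
P = rng.uniform(0, 1, size=(s, 2))         # true sensor positions (unknown)
Z = cp.Variable((2 + s, 2 + s), PSD=True)  # Z = [[I, X], [X^T, Y]]
X, Y = Z[:2, 2:], Z[2:, 2:]
resid = []
for i in range(s):                         # sensor-anchor range residuals
    for a in anchors:
        d2 = np.sum((P[i] - a) ** 2) + 0.01 * rng.normal()
        resid.append(cp.abs(a @ a - 2 * a @ X[:, i] + Y[i, i] - d2))
for i in range(s):                         # sensor-sensor range residuals
    for j in range(i + 1, s):
        d2 = np.sum((P[i] - P[j]) ** 2) + 0.01 * rng.normal()
        resid.append(cp.abs(Y[i, i] + Y[j, j] - 2 * Y[i, j] - d2))
prob = cp.Problem(cp.Minimize(cp.sum(cp.hstack(resid))),
                  [Z[:2, :2] == np.eye(2)])
prob.solve(solver=cp.SCS)
print("estimated positions:\n", X.value.T)
```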

Speakers
Jefferson Huang
Roummel Marcia
Peter Barkley
US Navy
I am a naval officer and a PhD student in Operations Research at the Naval Postgraduate School. My research interests include decentralized optimization, scheduling, and machine learning. In previous roles in the Navy, I’ve flown the P-3C Orion as a flight instructor and mission…
Robert Bassett
Tuesday July 22, 2025 4:15pm - 5:30pm PDT
Taper Hall (THH) 118, 3501 Trousdale Pkwy, Los Angeles, CA 90089
 