Tuesday July 22, 2025 10:30am - 11:45am PDT
Session: Progress in Nonsmooth Optimization
Chair: Feng Ruan
Cluster: Optimization Under Uncertainty and Data-driven Optimization

Talk 1: Subgradient Convergence Implies Subdifferential Convergence on Weakly Convex Functions: With Uniform Rates Guarantees
Speaker: Feng Ruan
Abstract: In nonsmooth, nonconvex stochastic optimization, understanding the uniform convergence of subdifferential mappings is crucial for analyzing stationary points of sample average approximations of the risk as these approximations approach the population risk. Yet characterizing this convergence remains a fundamental challenge. This work introduces a novel perspective by connecting the uniform convergence of subdifferential mappings to that of subgradient mappings as the empirical risk converges to the population risk. We prove that, for stochastic weakly convex objectives and within any open set, a uniform bound on the convergence of subgradients -- chosen arbitrarily from the corresponding subdifferential sets -- translates into a uniform bound on the convergence of the subdifferential sets themselves, measured in the Hausdorff metric. Using this technique, we derive uniform convergence rates for the subdifferential sets of stochastic convex-composite objectives. Our results do not rely on key distributional assumptions in the literature, which require the population and finite-sample subdifferentials to be continuous in the Hausdorff metric, yet they still provide tight convergence rates. These guarantees lead to new insights into the nonsmooth landscapes of such objectives in finite samples.
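
To fix ideas, here is a schematic form of the kind of guarantee described above, in illustrative notation not taken from the talk. Write F_n for the empirical (sample-average) risk and F for the population risk, and for sets A, B let

\[
d_H(A,B) \;=\; \max\Bigl\{\, \sup_{a\in A}\,\inf_{b\in B}\|a-b\|,\; \sup_{b\in B}\,\inf_{a\in A}\|a-b\| \,\Bigr\}
\]

denote the Hausdorff distance. The statement converts a uniform bound on arbitrarily chosen subgradient selections \(g_n(x)\in\partial F_n(x)\) and \(g(x)\in\partial F(x)\) over an open set \(U\),

\[
\sup_{x\in U}\; \|g_n(x)-g(x)\| \;\le\; \varepsilon,
\]

into a uniform Hausdorff bound on the subdifferential sets themselves,

\[
\sup_{x\in U}\; d_H\bigl(\partial F_n(x),\,\partial F(x)\bigr) \;\le\; \delta(\varepsilon),
\]

where \(\delta(\varepsilon)\to 0\) as \(\varepsilon\to 0\); the precise dependence of \(\delta\) on \(\varepsilon\) and on the weak-convexity parameters is what the talk quantifies.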

Talk 2: Variational Theory and Algorithms for a Class of Asymptotically Approachable Nonconvex Problems
Speaker: Ying Cui
Abstract: We investigate a class of composite nonconvex functions, where the outer function is the sum of univariate extended-real-valued convex functions and the inner function is the limit of difference-of-convex functions. A notable feature of this class is that the inner function can be merely lower semicontinuous rather than continuously differentiable. It covers a range of important yet challenging applications, including the composite value functions of nonlinear programs and value-at-risk constraints. We propose an asymptotic decomposition of the composite function that guarantees epi-convergence to the original function, leading to necessary optimality conditions for the corresponding minimization problem. The proposed decomposition also enables us to design a numerical algorithm such that any accumulation point of the generated sequence, if one exists, satisfies the newly introduced optimality conditions. These results expand on the study of so-called amenable functions introduced by Poliquin and Rockafellar in 1992, which are compositions of convex functions with smooth maps, and on the prox-linear methods for their minimization.
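
In illustrative notation (not taken verbatim from the talk), the class of objectives described above can be pictured as

\[
f(x) \;=\; \sum_{i=1}^{m} \varphi_i\bigl(c_i(x)\bigr),
\qquad \varphi_i:\mathbb{R}\to\mathbb{R}\cup\{+\infty\} \text{ convex},
\qquad c_i(x) \;=\; \lim_{k\to\infty}\bigl(g_i^{k}(x)-h_i^{k}(x)\bigr),
\]

with each \(g_i^{k}\) and \(h_i^{k}\) convex, so every inner function \(c_i\) is a pointwise limit of difference-of-convex functions and may be merely lower semicontinuous. One natural surrogate sequence consistent with this description is \(f_k(x)=\sum_i \varphi_i\bigl(g_i^{k}(x)-h_i^{k}(x)\bigr)\); the decomposition in the talk is designed so that such approximations epi-converge to \(f\), which is what transfers the optimality conditions back to the original problem.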

Talk 3: Survey Descent: a Case-Study in Amplifying Optimization Research with Modern ML Workflows
Speaker: X.Y. Han
Abstract: In classical optimization, one learns that for smooth, strongly convex objectives, gradient descent ensures linear convergence of the iterates and objective values in the number of gradient evaluations. Nonsmooth objective functions are more challenging: existing solutions typically invoke cutting-plane methods whose complexities are difficult to bound, leading to convergence guarantees that are sublinear in the cumulative number of gradient evaluations. We instead propose a multipoint generalization of gradient descent called Survey Descent. In this method, one first leverages a one-time initialization procedure to gather a "survey" of points. Then, during each iteration of the method, the survey points are updated in parallel using a simple, four-line procedure inspired by gradient descent. Under certain regularity conditions, we prove that Survey Descent converges linearly to the optimal solution in the nonsmooth setting. Despite being a nominally mathematical endeavor, we discuss how the development of Survey Descent was significantly accelerated by a frictionless computational workflow made possible by tools from modern machine learning (ML); how this model of applying new ML workflows to open questions in optimization and applied probability could amplify researchers' productivity; which practical computational bottlenecks could hinder this integration; and what tools are needed to overcome those obstacles.
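
The multipoint structure can be pictured with a short sketch. Everything below is illustrative: the function name survey_descent_sketch, its parameters, and the per-point update (a plain subgradient step) are stand-ins chosen for this example; the actual four-line Survey Descent update couples the survey points and is not reproduced here.

import numpy as np

def survey_descent_sketch(subgrad, survey, step=1e-2, iters=200):
    """Maintain k survey points and update each one per iteration.

    subgrad : callable returning a subgradient at a point
    survey  : array of shape (k, d), the initial survey of points
    The update below is a placeholder subgradient step, not the
    coupled four-line update described in the talk.
    """
    survey = np.asarray(survey, dtype=float)
    for _ in range(iters):
        # All survey points are updated in parallel from the current survey.
        grads = np.array([subgrad(x) for x in survey])
        survey = survey - step * grads
    return survey

# Toy usage on the nonsmooth function f(x) = ||x||_1, whose subgradient
# away from zero is sign(x).
survey0 = np.random.randn(5, 3)   # a survey of 5 points in R^3
final_survey = survey_descent_sketch(lambda x: np.sign(x), survey0)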

Speakers

Feng Ruan

Ying Cui

X.Y. Han

Assistant Professor, Chicago Booth
Name: Prof. X.Y. Han
Title: Assistant Professor of Operations Management and Applied AI
Affiliation: Chicago Booth
Bio: X.Y. Han is an assistant professor of Operations Management and Applied Artificial Intelligence at the University of Chicago, Booth School of Business. His research...
Taper Hall (THH) 101 3501 Trousdale Pkwy, 101, Los Angeles, CA 90089
