Tuesday July 22, 2025 4:15pm - 5:30pm PDT
Session: Special Session in Honor of Suvrajeet Sen: Nonsmooth Methods in Stochastic Programming
Chair: Junyi Liu
Cluster: Optimization Under Uncertainty and Data-driven Optimization

Talk 1: Improving Dimension Dependence in Zeroth-Order Schemes via Exponentially Shifted Gaussian Smoothing
Speaker: Uday Shanbhag
Abstract: Smoothing-enabled zeroth-order (ZO) methods for nonsmooth convex stochastic optimization have assumed increasing relevance. A shortcoming of such schemes is the dimension dependence in their complexity guarantees, a concern that impedes truly large-scale implementations. We develop a novel exponentially-shifted Gaussian smoothing (esGS) gradient estimator by leveraging a simple change-of-variables argument. The moment bounds of the esGS estimator exhibit a muted dependence on dimension. When the esGS estimator is incorporated within a ZO framework, the resulting iteration complexity bounds are reduced to $O(n\epsilon^{-2})$ from $O(n^2\epsilon^{-2})$, the latter being the best available for the existing two-point estimator with Gaussian smoothing. This is joint work with Mingrui Wang and Prakash Chakraborty.
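For readers tracking the complexity claim, the following is a minimal NumPy sketch of the classical two-point Gaussian-smoothing estimator to which the $O(n^2\epsilon^{-2})$ bound applies; the esGS estimator itself is not specified in the abstract, so this shows only the baseline, with illustrative function and variable names.

import numpy as np

def two_point_gaussian_zo_grad(f, x, mu=1e-3, rng=None):
    # Classical two-point Gaussian-smoothing ZO estimator: an unbiased
    # estimate of the gradient of the smoothed surrogate
    # f_mu(x) = E[f(x + mu*u)], u ~ N(0, I_n).
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

# Illustrative use on the nonsmooth convex function f(x) = ||x||_1;
# averaging many single-direction estimates reduces variance.
f = lambda x: np.abs(x).sum()
x = np.ones(10)
g = np.mean([two_point_gaussian_zo_grad(f, x) for _ in range(1000)], axis=0)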

Talk 2: A Decomposition-Based Algorithm for Infinite-Horizon Stochastic Programs on Finite-State Markov Chains
Speaker: Shuotao Diao
Abstract: Infinite horizon stochastic programs on finite-state Markov chains can be expressed as a semi-contractive model. In this work, we provide a unifying view of the conditions ensuring the existence of fixed points of such infinite horizon stochastic programs. We further develop a decomposition-based method to solve these programs with a convergence guarantee.
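As a concrete reference point for the fixed-point discussion, here is a small NumPy sketch of fixed-point iteration on a finite-state Markov chain in the simpler contractive (discounted) case, where a unique fixed point exists by the Banach fixed-point theorem; the semi-contractive setting of the talk is more general, and the data below (P, c, gamma) are hypothetical.

import numpy as np

# Hypothetical 3-state chain: transition matrix P, per-state cost c,
# and discount factor gamma < 1 (the contractive case).
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
c = np.array([1.0, 2.0, 0.5])
gamma = 0.9

def T(v):
    # T(v) = c + gamma * P v is a gamma-contraction in the sup norm,
    # so iterating T converges to its unique fixed point.
    return c + gamma * P @ v

v = np.zeros(3)
for _ in range(1000):
    v_next = T(v)
    if np.max(np.abs(v_next - v)) < 1e-10:
        break
    v = v_next
# v now approximates the fixed point v* = (I - gamma*P)^{-1} c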

Talk 3: Adaptive Sampling-Based Nonconvex and Nonsmooth Approaches for Stochastic Programs with Implicitly Decision-Dependent Uncertainty
Speaker: Junyi Liu
Abstract: We consider a class of stochastic programming problems in which an implicitly decision-dependent random variable follows a nonparametric regression model with heteroskedastic error. We develop an adaptive sampling-based algorithm that integrates the simulation scheme with statistical estimates to construct sampling-based surrogate functions, so that the simulation process is guided by the algorithmic procedure. We establish a nonasymptotic convergence analysis in terms of $(\epsilon, \delta)$-near stationarity in expectation under variable proximal parameters and batch sizes, leading to a superior convergence rate. Furthermore, we show that the proposed adaptive simulation scheme embedded in the sampling-based algorithm yields better error control of the sampling-based surrogate functions and thus enhances the stability and efficiency of the algorithm, as further evidenced by numerical results.
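The sketch below is schematic and not the authors' algorithm: a stochastic-gradient loop on sampled surrogate functions for a toy decision-dependent model $Y(x) = m(x) + \sigma(x)\varepsilon$, with batch sizes $n_k$ growing and step parameters $\rho_k$ shrinking across iterations; the mean $m$, heteroskedastic scale $\sigma$, and loss $h$ are all placeholder choices.

import numpy as np

rng = np.random.default_rng(0)
a, s0, s1, lam = np.array([1.0, -0.5]), 0.1, 0.05, 0.5

def sample_grad(x, n):
    # Gradient of the sampled surrogate F_n(x) = (1/n) sum_i h(x, Y_i(x)),
    # with placeholder choices h(x, y) = (y - 1)^2 + lam*||x||^2,
    # m(x) = a @ x, and sigma(x) = s0 + s1 * (x @ x).
    eps = rng.standard_normal(n)                 # fresh simulated noise
    m, sig = a @ x, s0 + s1 * (x @ x)
    y = m + sig * eps                            # decision-dependent samples
    dy = a[None, :] + np.outer(eps, 2 * s1 * x)  # dY_i/dx for each sample
    return (2 * (y - 1.0)[:, None] * dy).mean(axis=0) + 2 * lam * x

x = np.ones(2)
for k in range(1, 51):
    n_k = 10 * k              # batch size grows with the iteration count
    rho_k = 1.0 / np.sqrt(k)  # diminishing step/proximal parameter
    x = x - rho_k * sample_grad(x, n_k)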

Speakers

Uday Shanbhag
Shuotao Diao
Junyi Liu
Location: Taper Hall (THH) 210, 3501 Trousdale Pkwy, Los Angeles, CA 90089

