Thursday July 24, 2025 1:15pm - 2:30pm PDT
Session: Distributionally Robust Optimization (DRO) I
Chair: Man Yiu Tsang

Talk 1: On the trade-off between distributional belief and ambiguity: Conservatism, finite-sample guarantees, and asymptotic properties
Speaker: Man Yiu Tsang
Abstract: We present a new data-driven trade-off (TRO) approach for modeling uncertainty that serves as a middle ground between the optimistic approach, which adopts a distributional belief, and the pessimistic distributionally robust optimization approach, which hedges against distributional ambiguity. We equip the TRO model with a TRO ambiguity set characterized by a size parameter controlling the level of optimism and a shape parameter representing distributional ambiguity. We first show that constructing the TRO ambiguity set using a general star-shaped shape parameter with the empirical distribution as its star center is necessary and sufficient to guarantee the hierarchical structure of the sequence of TRO ambiguity sets. Then, we analyze the properties of the TRO model, including quantifying conservatism, quantifying bias and generalization error, and establishing asymptotic properties. Specifically, we show that the TRO model could generate a spectrum of decisions, ranging from optimistic to conservative decisions. Additionally, we show that it could produce an unbiased estimator of the true optimal value. Furthermore, we establish the almost-sure convergence of the optimal value and the set of optimal solutions of the TRO model to their true counterparts. We exemplify our theoretical results using stylized optimization problems.
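The convex-combination structure described above (a TRO ambiguity set whose star center is the empirical distribution) implies, by linearity of the expectation, that the worst-case objective blends the empirical expectation with the shape set's worst case. A minimal sketch of this spectrum, assuming a discrete newsvendor and taking the shape set to be all distributions on the observed scenarios (both modeling choices are our illustrative assumptions, not the talk's):

```python
import numpy as np

# Illustrative TRO-style objective for a discrete newsvendor.
# Assumption (ours): the shape set is all distributions supported on the
# observed scenarios, so its worst case is the single worst scenario cost.

def newsvendor_cost(order, demand, price=5.0, cost=3.0):
    """Net cost of ordering `order` units when demand realizes as `demand`."""
    sales = np.minimum(order, demand)
    return cost * order - price * sales

def tro_objective(order, demands, theta):
    """Blend of the empirical (optimistic) and worst-case (pessimistic) terms.

    theta = 0 recovers the sample-average (SAA) objective;
    theta = 1 recovers the fully distributionally robust objective.
    """
    costs = newsvendor_cost(order, demands)
    empirical = costs.mean()   # distributional-belief term (star center)
    worst = costs.max()        # ambiguity term over the shape set
    return (1 - theta) * empirical + theta * worst

demands = np.array([2.0, 4.0, 6.0, 8.0])
for theta in (0.0, 0.5, 1.0):
    best = min(range(11), key=lambda q: tro_objective(q, demands, theta))
    print(f"theta={theta}: order {best}")
```

Sweeping `theta` from 0 to 1 moves the chosen order quantity from the SAA decision toward the conservative worst-case decision, mirroring the "spectrum of decisions" the abstract describes.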

Talk 2: Generalization Bound Analysis of Nonconvex Minimax Optimization and Beyond
Speaker: Siqi Zhang
Abstract: In this work, we systematically investigate the generalization bounds of algorithms that solve nonconvex–(strongly)–concave (NC-SC/NC-C) stochastic minimax optimization, measured by the stationarity of primal functions. We first establish algorithm-agnostic generalization bounds via uniform convergence between the empirical and population minimax problems, thereby deriving sample complexities for achieving generalization. We then explore algorithm-dependent generalization bounds using algorithmic stability arguments. In particular, we introduce a novel stability notion for minimax problems and build its connection to generalization bounds. Consequently, we establish algorithm-dependent generalization bounds for stochastic gradient descent ascent (SGDA) and more general sampling-based algorithms. We will also discuss some extensions of these results to more general settings.
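The SGDA algorithm analyzed above alternates a stochastic gradient descent step on the primal variable with an ascent step on the dual variable. A minimal sketch on a toy convex-strongly-concave objective (the NC-SC setting in the talk is more general; the objective, step size, and synthetic data here are illustrative assumptions):

```python
import numpy as np

# Toy SGDA sketch: f(x, y; xi) = 0.5*(x - xi)^2 + x*y - y^2, xi drawn from data.
# f is strongly concave in y; the talk's nonconvex-in-x setting is more general.

rng = np.random.default_rng(0)
data = rng.normal(1.0, 0.5, size=200)  # synthetic samples

def grads(x, y, xi):
    gx = (x - xi) + y   # partial derivative in x
    gy = x - 2.0 * y    # partial derivative in y
    return gx, gy

x, y, eta = 0.0, 0.0, 0.05
for _ in range(2000):
    xi = rng.choice(data)        # one stochastic sample per step
    gx, gy = grads(x, y, xi)
    x -= eta * gx                # descent on the primal variable
    y += eta * gy                # ascent on the dual variable
```

In expectation the fixed point solves `x - mu + y = 0` and `y = x / 2` (with `mu` the sample mean), so the iterates hover near `x = 2*mu/3`, the stationary point of the primal function `max_y f(x, y)` by which the abstract measures stationarity.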

Talk 3: Optimized Dimensionality Reduction for Moment-based Distributionally Robust Optimization
Speaker: Kai Pan
Abstract: Moment-based distributionally robust optimization (DRO) provides an optimization framework that integrates statistical information with traditional optimization approaches. Under this framework, one assumes that the underlying joint distribution of the random parameters lies in a distributional ambiguity set constructed from moment information and makes decisions against the worst-case distribution within the set. Although most moment-based DRO problems can be reformulated as semidefinite programming (SDP) problems solvable in polynomial time, solving high-dimensional SDPs remains time-consuming. Unlike existing approximation approaches that first reduce the dimensionality of the random parameters and then solve the approximated SDPs, we propose an optimized dimensionality reduction (ODR) approach that integrates the dimensionality reduction of the random parameters with the subsequent optimization problems. This integration yields two outer approximations and one inner approximation of the original problem, all of which are low-dimensional SDPs that can be solved efficiently, providing two lower bounds and one upper bound, respectively. More importantly, these approximations can theoretically achieve the optimal value of the original high-dimensional SDPs. As these approximations are nonconvex SDPs, we develop modified Alternating Direction Method of Multipliers (ADMM) algorithms to solve them efficiently. We demonstrate the effectiveness of the proposed ODR approach and algorithms on multiproduct newsvendor and production-transportation problems. Numerical results show significant advantages in computational time and solution quality over three benchmark approaches: our approach obtains an optimal or near-optimal (mostly within 0.1%) solution and reduces computational time by up to three orders of magnitude.
Paper reference: Jiang, S., Cheng, J., Pan, K., & Shen, Z. J. M. (2023). Optimized dimensionality reduction for moment-based distributionally robust optimization. arXiv preprint arXiv:2305.03996.
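The paper's modified ADMM targets nonconvex low-dimensional SDPs; the splitting pattern itself is easiest to see on the classic convex template min f(x) + g(z) subject to x = z. The lasso instance below is our own illustrative choice, not the paper's algorithm:

```python
import numpy as np

# Generic ADMM sketch for: min 0.5*||A x - b||^2 + lam*||z||_1  s.t.  x = z.
# This is the standard convex template, shown only to illustrate the
# x-update / z-update / dual-update pattern that the paper's modified
# ADMM adapts to nonconvex SDPs.

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)
lam, rho = 1.0, 1.0           # l1 weight and ADMM penalty parameter

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

x = np.zeros(10)
z = np.zeros(10)
u = np.zeros(10)              # scaled dual variable
AtA = A.T @ A + rho * np.eye(10)
Atb = A.T @ b
for _ in range(200):
    x = np.linalg.solve(AtA, Atb + rho * (z - u))  # smooth subproblem
    z = soft_threshold(x + u, lam / rho)           # proximal l1 subproblem
    u = u + x - z                                  # dual (multiplier) update
```

Each pass solves two easy subproblems and nudges the multiplier toward consensus `x = z`; in the ODR setting the analogous subproblems couple the projection matrix with the low-dimensional SDP variables.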

Speakers
Man Yiu Tsang
Ph.D. Candidate, Lehigh University
Man Yiu (Tim) Tsang is a Ph.D. candidate in Industrial and Systems Engineering at Lehigh University under the supervision of Prof. Karmel S. Shehadeh. He obtained his BSc and MPhil in Risk Management...
Siqi Zhang
Kai Pan
Associate Professor, The Hong Kong Polytechnic University
Kai Pan is currently an Associate Professor in Operations Management at the Faculty of Business of The Hong Kong Polytechnic University (PolyU) and the Director of the MSc Program in Operations Management (MScOM). He serves as a Secretary/Treasurer for the INFORMS Computing Socie...
Taper Hall (THH) 112, 3501 Trousdale Pkwy, Los Angeles, CA 90089
