Session: Multi-Stage Stochastic Programming
Chair: Yifan Hu
Cluster: Optimization Under Uncertainty and Data-driven Optimization
Talk 1: Polynomial Complexity of Multi-Stage Stochastic Programming with Stagewise Dependent Randomness
Speaker: Yifan Hu
Abstract: We revisit the empirical approximation of multi-stage stochastic programming studied in Shapiro (2006). For 3-stage problems, previous results give an O(epsilon^{-4}) sample complexity bound for computing an epsilon-approximate solution. It was thus conjectured that the sample complexity of the T-stage problem depends exponentially on T, i.e., O(epsilon^{-2T+2}). This conjecture underpins a long-standing belief that multi-stage stochastic programming suffers from the curse of dimensionality. In this work, we construct a novel method for the 3-stage problem that achieves an O(epsilon^{-2}) complexity bound, indicating that the complexity bounds for the T-stage problem can be improved. We further construct a novel empirical approximation of the T-stage problem and establish a complexity bound that is polynomial in 1/epsilon.
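For context, a generic nested formulation of the 3-stage problem and the classical nested-SAA scaling can be sketched as follows (illustrative notation, not taken from the talk):

    % Generic nested 3-stage stochastic program (illustrative notation):
    % decisions x_1, x_2, x_3; stagewise-dependent randomness xi_2, xi_3.
    \min_{x_1 \in X_1} f_1(x_1)
      + \mathbb{E}_{\xi_2}\!\left[ \min_{x_2 \in X_2(x_1,\xi_2)} f_2(x_2,\xi_2)
      + \mathbb{E}_{\xi_3 \mid \xi_2}\!\left[ \min_{x_3 \in X_3(x_2,\xi_3)} f_3(x_3,\xi_3) \right] \right]
    % Nested SAA replaces each conditional expectation by an empirical average
    % over roughly epsilon^{-2} sampled children; multiplying the per-stage
    % sample sizes across the T-1 random stages gives the O(epsilon^{-2T+2})
    % scaling quoted above, which the talk argues is not tight.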
Talk 2: Sample Complexity of Data-driven Multistage Stochastic Programming under Stagewise Dependent Uncertainty
Speaker: Hyuk Park
Abstract: This work addresses the challenges of applying the Sample Average Approximation (SAA) method to multistage stochastic programming under a stagewise-dependent data process. While SAA is commonly used in static and two-stage stochastic optimization, it becomes computationally intractable in general multistage settings as the time horizon $T$ increases, since the approximation error grows exponentially in $T$, a phenomenon known as the curse of dimensionality in the time horizon. To overcome this limitation, we introduce a novel data-driven approach, the Markov Recombining Scenario Tree (MRST) method. MRST constructs an approximate problem using only two independent trajectories of historical data. We show that our method achieves polynomial sample complexity, providing a more efficient data-driven alternative to SAA. Numerical experiments on the Linear Quadratic Regulator (LQR) problem demonstrate that MRST outperforms SAA, successfully mitigating the curse of dimensionality.
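As a rough back-of-the-envelope illustration of why a full SAA scenario tree becomes intractable while a recombining structure stays manageable (a generic sketch with an assumed branching number b, not the MRST construction itself):

    # Illustrative node counts: full (non-recombining) scenario tree vs. a
    # recombining tree; b = samples/states per stage, T = number of stages.
    # Generic sketch only -- not the MRST method presented in the talk.
    def full_tree_nodes(b: int, T: int) -> int:
        # 1 + b + b^2 + ... + b^(T-1) nodes in a full scenario tree
        return sum(b**t for t in range(T))

    def recombining_tree_nodes(b: int, T: int) -> int:
        # root node plus b representative states at each later stage
        return 1 + b * (T - 1)

    for T in (3, 5, 10, 20):
        print(T, full_tree_nodes(10, T), recombining_tree_nodes(10, T))

With b = 10, the full tree already exceeds 10^19 nodes at T = 20, while the recombining tree has fewer than 200.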
Talk 3: Dual dynamic programming for stochastic programs over an infinite horizon
Speaker: Caleb Ju
Abstract: We consider a dual dynamic programming algorithm for solving stochastic programs over an infinite horizon. We establish non-asymptotic convergence results when using an explorative strategy, and we then strengthen these results by reducing the dependence on the effective planning horizon from quadratic to linear. This improvement is achieved by combining the forward and backward phases of dual dynamic programming into a single iteration. We then apply our algorithms to a class of problems called hierarchical stationary stochastic programs, in which the cost function is itself a stochastic multi-stage program. The hierarchical program can model problems with a hierarchy of decision-making, e.g., how long-term decisions influence day-to-day operations. We show that when the subproblems are solved inexactly via a dynamic stochastic approximation-type method, the resulting hierarchical dual dynamic programming algorithm finds approximately optimal solutions in finite time. Preliminary numerical results show the practical benefits of using the explorative strategy for solving the Brazilian hydro-thermal planning problem and economic dispatch, as well as the potential to exploit parallel computing.
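At a high level, dual dynamic programming over an infinite horizon maintains a cutting-plane lower model of the discounted value function; a generic form of the recursion, written in illustrative notation rather than the talk's, is:

    % Discounted infinite-horizon Bellman equation (generic, illustrative notation):
    V(x) = \min_{u \in U(x)} \Big\{ c(x,u) + \gamma \, \mathbb{E}_{\xi}\big[ V\big(g(x,u,\xi)\big) \big] \Big\}
    % A dual dynamic programming scheme keeps a piecewise-linear lower model
    %   \hat{V}_k(x) = \max_{j \le k} \{ \alpha_j + \beta_j^{\top} x \} \le V(x),
    % solves the stage problem with \hat{V}_k in place of V at states visited on
    % a sampled forward trajectory, and records the resulting subgradient as a
    % new cut (\alpha_{k+1}, \beta_{k+1}); monotonicity of the Bellman operator
    % keeps the model below V, so the lower bound improves as cuts accumulate.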