Session: Special Session in Honor of Suvrajeet Sen: Stochastic Mixed-Integer Programming
Chair: Lewis Ntaimo
Cluster: Optimization Under Uncertainty and Data-driven Optimization
Talk 1: On Disjunctive Decomposition for Stochastic Mixed-Integer Programming: A Reflection on Algorithm Development and Applications
Speaker: Lewis Ntaimo
Abstract: Two-stage stochastic mixed-integer programming (SMIP) involves making discrete decisions in the face of future uncertainty and has many applications in science and engineering. However, solving SMIPs is very challenging, mainly due to their nonconvexity and large scale. In this talk, we reflect on the development of disjunctive decomposition for SMIP initiated by Suvrajeet Sen. Disjunctive decomposition relies on generating disjunctive cutting planes that are shared among scenarios to sequentially convexify the nonconvex expected recourse function. We review the basic theory and derivation of a class of disjunctive decomposition algorithms and illustrate the algorithms using simple numerical examples. Finally, we discuss computer implementation and application of the algorithms toward solving standard SMIP problems.
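As a rough illustration of the master/subproblem structure that disjunctive decomposition builds on, here is a minimal L-shaped (Benders) loop for a toy two-stage instance with continuous recourse. This is a sketch under illustrative data, not the D2 algorithm itself: D2 additionally generates disjunctive cuts to convexify an integer recourse, a step omitted here.

```python
# Toy L-shaped (Benders) loop: min x + E[2 * max(d - x, 0)] over x >= 0.
# Disjunctive decomposition (D2) shares this master/subproblem structure,
# adding disjunctive cuts for *integer* recourse (omitted here).
# All data and the instance are illustrative.
import numpy as np
from scipy.optimize import linprog

c, q = 1.0, 2.0                      # first- and second-stage unit costs
demands = np.array([3.0, 6.0, 9.0])  # scenario demands
probs = np.array([0.3, 0.4, 0.3])    # scenario probabilities

cuts = []  # optimality cuts of the form theta >= alpha - beta * x
for it in range(50):
    # Master problem: min c*x + theta over x, theta >= 0 s.t. all cuts.
    A_ub = [[-beta, -1.0] for alpha, beta in cuts] or None
    b_ub = [-alpha for alpha, beta in cuts] or None
    res = linprog([c, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)])
    x, theta = res.x
    # Scenario subproblems: min q*y s.t. y >= d_s - x, y >= 0. The dual
    # value on the demand row is pi_s = q if d_s > x, else 0.
    pis = np.where(demands > x, q, 0.0)
    recourse = float(probs @ (pis * np.maximum(demands - x, 0.0)))
    if recourse <= theta + 1e-9:     # no violated cut: stop
        print(f"x = {x:.3f}, total cost = {c * x + recourse:.3f}")
        break
    # Aggregate scenario duals into one optimality cut on the master.
    cuts.append((float(probs @ (pis * demands)), float(probs @ pis)))
```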
Talk 2: An Asymptotically Optimal Coordinate Descent Algorithm for Learning Bayesian Networks from Gaussian Models
Speaker: Simge Kucukyavuz
Abstract: We study the problem of learning Bayesian networks from continuous observational data, generated according to a linear Gaussian structural equation model. We consider an ℓ0-penalized maximum likelihood estimator for this problem, which is known to have favorable statistical properties but is computationally challenging to solve, especially for medium-sized Bayesian networks. We propose a new coordinate descent algorithm to approximate this estimator and prove several remarkable properties of our procedure: the algorithm converges to a coordinate-wise minimum, and despite the non-convexity of the loss function, as the sample size tends to infinity, the objective value of the coordinate descent solution converges to the optimal objective value of the ℓ0-penalized maximum likelihood estimator. Finite-sample optimality and statistical consistency guarantees are also established. To the best of our knowledge, our proposal is the first coordinate descent procedure endowed with optimality and statistical guarantees in the context of learning Bayesian networks. Numerical experiments on synthetic and real data demonstrate that our coordinate descent method can obtain near-optimal solutions while being scalable.
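To make the flavor of such a coordinate-wise procedure concrete, here is a rough sketch (not the authors' algorithm) of coordinate descent on an ℓ0-penalized least-squares surrogate of the linear Gaussian SEM fit, with a hard-threshold coordinate update and a DFS acyclicity check; the data, penalty level, and update rule are all illustrative assumptions.

```python
# Sketch: coordinate descent for an l0-penalized least-squares surrogate
# of the linear Gaussian SEM fit X ~ X @ B, where B is the weighted
# adjacency matrix of a DAG. Each coordinate B[k, j] gets its closed-form
# least-squares value, kept only if the SSE drop beats the penalty and
# edge k -> j keeps the graph acyclic. Illustrative, not the talk's method.
import numpy as np

def creates_cycle(B, k, j):
    """Would edge k -> j close a directed cycle, i.e. does j reach k?"""
    stack, seen = [j], set()
    while stack:
        u = stack.pop()
        if u == k:
            return True
        if u not in seen:
            seen.add(u)
            stack.extend(np.flatnonzero(B[u]).tolist())  # children of u
    return False

def coordinate_descent(X, lam, sweeps=100):
    n, p = X.shape
    sq = (X ** 2).sum(axis=0)              # column norms ||X_k||^2
    B = np.zeros((p, p))
    for _ in range(sweeps):
        changed = False
        for j in range(p):                 # node j regressed on parents
            r = X[:, j] - X @ B[:, j]      # current residual of node j
            for k in range(p):
                if k == j:
                    continue
                r_part = r + X[:, k] * B[k, j]   # residual without k -> j
                b = float(X[:, k] @ r_part) / sq[k]
                # l0 rule: keep the edge iff the SSE drop b^2 * ||X_k||^2
                # exceeds 2 * n * lam, and the edge keeps B acyclic.
                keep = b * b * sq[k] > 2 * n * lam and not creates_cycle(B, k, j)
                new = b if keep else 0.0
                if new != B[k, j]:
                    r = r_part - X[:, k] * new
                    B[k, j] = new
                    changed = True
        if not changed:                    # coordinate-wise minimum reached
            break
    return B

# Tiny demo on a synthetic chain 0 -> 1 -> 2; at finite n the returned
# coordinate-wise minimum may orient some edges differently.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 0.8 * x0 + 0.5 * rng.normal(size=500)
x2 = -0.6 * x1 + 0.5 * rng.normal(size=500)
print(np.round(coordinate_descent(np.column_stack([x0, x1, x2]), lam=0.05), 2))
```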
Talk 3: A Stochastic Diversion Path Problem
Speaker: Cole Smith
Abstract: We examine a stochastic network optimization problem in which the goal is to modify arc lengths so that a specified path (the “diversion path”) will be optimal with sufficiently high probability. Given modification costs for each arc in the network, the objective in a deterministic form of the problem would be to minimize the sum of modification costs needed to guarantee that the diversion path is optimal. In the stochastic version, each arc length is an independent, uniformly distributed random variable. (The lower bound on the arc lengths is nonnegative.) Given a parameter 0 < τ
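The abstract is cut off above; reading τ as a required probability level is an assumption. Under that reading, the sketch below illustrates the feasibility side of the model: for a fixed choice of arc-length bounds, a Monte Carlo estimate of the probability that the diversion path is a shortest s-t path. All network data are illustrative.

```python
# Monte Carlo check for the stochastic diversion-path model: arc lengths
# are independent Uniform(lo, hi) with lo >= 0, and we estimate the
# probability that a fixed diversion path is a shortest s-t path.
# (Reading tau as this probability threshold is an assumption; the
# abstract above is truncated.) Data are illustrative.
import heapq, random

def dijkstra(nodes, arcs, lengths, s):
    """Shortest distances from s; arcs is a list of (u, v) keys into lengths."""
    adj = {u: [] for u in nodes}
    for (u, v) in arcs:
        adj[u].append((v, lengths[(u, v)]))
    dist = {u: float("inf") for u in nodes}
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def prob_path_optimal(nodes, arcs, bounds, path, s, t, trials=20000, seed=0):
    """Estimate P(diversion path is a shortest s-t path) by sampling."""
    rng = random.Random(seed)
    path_arcs = list(zip(path, path[1:]))
    hits = 0
    for _ in range(trials):
        lengths = {a: rng.uniform(*bounds[a]) for a in arcs}
        plen = sum(lengths[a] for a in path_arcs)
        if plen <= dijkstra(nodes, arcs, lengths, s)[t] + 1e-12:
            hits += 1
    return hits / trials

# Tiny example: diversion path 1 -> 2 -> 4 vs. two competing routes.
nodes = [1, 2, 3, 4]
arcs = [(1, 2), (2, 4), (1, 3), (3, 4), (1, 4)]
bounds = {(1, 2): (0.0, 2.0), (2, 4): (0.0, 2.0),   # (lo, hi), lo >= 0
          (1, 3): (1.0, 3.0), (3, 4): (1.0, 3.0), (1, 4): (2.0, 5.0)}
print(prob_path_optimal(nodes, arcs, bounds, [1, 2, 4], 1, 4))
```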