Session: Mixed-Integer Programming-Based Techniques for the Global Optimization of MINLPs
Chair: Rohit Kannan
Cluster: Interplay Between Continuous and Discrete Optimization
Talk 1: Discrete nonlinear functions: formulations and applications
Speaker: Taotao He
Abstract: This paper examines nonlinear optimization problems that incorporate discrete decisions. We introduce new, improved formulation techniques that exploit the simplotope structure present in the domain of the binarization variables. Our technique identifies new polynomially solvable instances of the price promotion problem first studied by Cohen et al. (2021) and allows us to develop a linear programming (LP) formulation for an inventory optimization problem under a choice model proposed by Boada-Collado and Martinez-de Albeniz (2020). The techniques we develop rely on ideal formulations for submodular and fractional compositions of discrete functions, improving prior formulations for bilinear products suggested by Adams and Henry (2012). Submodular compositions also generalize L♮ (L-natural) functions over bounded domains, and our construction provides new insights into Lovász-extension-based formulations for this class of problems while expanding the class of nonlinear discrete optimization problems amenable to LP-based techniques.
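As a minimal illustration of the Lovász-extension connection (a generic sketch, not code from the talk; names are illustrative), the Python snippet below evaluates the Lovász extension of a set function, the object underlying the LP formulations mentioned above:

```python
import numpy as np

def lovasz_extension(f, x):
    """Evaluate the Lovasz extension of a set function f at x in [0, 1]^n.

    f: callable taking a Python set of indices, assumed submodular with
       f(empty set) = 0.
    x: 1-D array of length n with entries in [0, 1].
    """
    n = len(x)
    order = np.argsort(-x)               # coordinates in decreasing order
    x_sorted = np.append(x[order], 0.0)  # pad with 0 for the last increment
    value, chain = 0.0, set()
    for k in range(n):
        chain = chain | {order[k]}       # grow the chain of level sets
        # weight each level set by the gap between consecutive sorted entries
        value += (x_sorted[k] - x_sorted[k + 1]) * f(chain)
    return value

# Example: f(S) = sqrt(|S|) is submodular; the extension interpolates f
f = lambda S: len(S) ** 0.5
print(lovasz_extension(f, np.array([0.9, 0.2, 0.5])))
```

On 0/1 vectors the extension agrees with f, and for submodular f it is convex, which is what makes LP-based formulations of such compositions possible.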
Talk 2: Quadratic Cuts for Non-Convex MINLP in Practice
Speaker: Adrian Göß
Abstract: Finding a good solution to a mathematical optimization problem is only half the job: one also needs to certify its quality by providing a dual bound. When it comes to mixed-integer nonlinear programming (MINLP), strong prerequisites such as constraint qualifications appear suitable but may be difficult to verify computationally. In practice, solvers apply local refinement or convexification strategies to retrieve tight dual bounds. However, these concepts require appropriate big-M formulations, generate new subproblems, or struggle to represent non-convex characteristics with high accuracy, all of which can lead to long running times. As an alternative, we aim to leverage recent advances in mixed-integer quadratically constrained programming (MIQCP) and propose a global approximation of constraint functions by paraboloids, i.e., univariate quadratic terms. Each approximation is obtained as the solution to a mixed-integer linear programming (MIP) problem: for every nonlinear constraint function, we solve such MIPs to determine a small number of paraboloids that approximate it from either side. Replacing the nonlinearities with the corresponding quadratic functions yields a quadratically constrained relaxation of the original problem, and solving this MIQCP relaxation produces a dual bound whose tightness depends on the approximation guarantee of the paraboloids. In summary, this approach enables solvers that are explicitly tailored to quadratic constraints to solve MINLPs to global optimality.
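For intuition on the paraboloid-fitting step, here is a much-simplified sketch (an LP fitting a single univariate quadratic underestimator on sample points, not the MIP the abstract describes, which selects several paraboloids with an approximation guarantee; all names are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def quadratic_underestimator(f, xs):
    """Fit q(x) = a*x**2 + b*x + c with q(x_i) <= f(x_i) on sample points,
    minimizing the worst-case gap t >= f(x_i) - q(x_i).

    Decision variables: (a, b, c, t).
    """
    fx = np.array([f(x) for x in xs])
    cost = np.array([0.0, 0.0, 0.0, 1.0])       # minimize t
    # q(x_i) <= f(x_i):  a*x_i^2 + b*x_i + c <= f(x_i)
    A1 = np.column_stack([xs**2, xs, np.ones_like(xs), np.zeros_like(xs)])
    # f(x_i) - q(x_i) <= t, rewritten as  -a*x_i^2 - b*x_i - c - t <= -f(x_i)
    A2 = np.column_stack([-xs**2, -xs, -np.ones_like(xs), -np.ones_like(xs)])
    res = linprog(cost, A_ub=np.vstack([A1, A2]), b_ub=np.hstack([fx, -fx]),
                  bounds=[(None, None)] * 3 + [(0, None)])
    return res.x[:3]

xs = np.linspace(0.5, 2.0, 50)
a, b, c = quadratic_underestimator(np.exp, xs)  # underestimate exp on [0.5, 2]
print(a, b, c)
```

Fitting the overestimator side is symmetric (flip the first set of inequalities), and replacing each nonlinearity by its fitted quadratics yields the quadratically constrained relaxation described above.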
Talk 3: Learning to Accelerate the Global Optimization of QCQPs: Strong Partitioning and an End-to-End Graph ML-Based Approach
Speaker: Rohit Kannan
Abstract: We learn optimal instance-specific heuristics for the global minimization of nonconvex quadratically constrained quadratic programs (QCQPs). Specifically, we consider partitioning-based convex mixed-integer programming relaxations for nonconvex QCQPs and propose the novel problem of strong partitioning, which aims to optimally partition variable domains without sacrificing global optimality. Given that solving this max-min strong partitioning problem exactly can be highly challenging, we design a local optimization method that leverages the generalized gradients of the value function in the inner minimization problem. However, solving the strong partitioning problem to local optimality can still be computationally expensive. To address this, we propose a simple AdaBoost regression-based machine learning (ML) approximation for homogeneous families of QCQPs. We conduct a detailed computational study on randomly generated QCQP families, including instances of the pooling problem, using the open-source global solver Alpine. Numerical experiments demonstrate that our AdaBoost regression-based approximation of strong partitioning reduces Alpine's solution time by a factor of 2 to 4.5 on average, with maximum reduction factors ranging from 10 to 200 across different QCQP families. Additionally, we design a novel end-to-end loss function to train a neural network-based policy that directly maps QCQP instance features to corresponding partitioning points, eliminating the need for an expert strong partitioning policy. We also demonstrate how to apply the graph scattering transform to train a single neural network model that generalizes across QCQP families of various sizes. Our results show that this graph scattering-based end-to-end neural network learns a partitioning policy that significantly outperforms the AdaBoost-based approach and generalizes well to larger QCQPs beyond those encountered in training.
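For intuition on why the placement of partition points matters, the sketch below (illustrative only; not Alpine's implementation or the strong partitioning method itself) compares the width of the McCormick envelope of a bilinear term w = x*y on a single box against the envelope on the partition piece containing the point:

```python
import numpy as np

def mccormick_gap(xl, xu, yl, yu, x, y):
    """Width of the McCormick envelope for w = x*y at (x, y) on the box."""
    lower = max(xl * y + x * yl - xl * yl, xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl, xl * y + x * yu - xl * yu)
    return upper - lower

def partitioned_gap(points, yl, yu, x, y):
    """Gap after partitioning the x-domain at the given points. The convex-MIP
    relaxation activates one piece via binaries; here we simply evaluate the
    envelope of the piece containing x."""
    for xl, xu in zip(points[:-1], points[1:]):
        if xl <= x <= xu:
            return mccormick_gap(xl, xu, yl, yu, x, y)

# Compare the single-box envelope with a 4-piece uniform partition on [0, 1]^2
x, y = 0.37, 0.62
print("unpartitioned gap:", mccormick_gap(0, 1, 0, 1, x, y))
print("4-piece gap:      ", partitioned_gap(np.linspace(0, 1, 5), 0, 1, x, y))
```

Strong partitioning, as proposed in the talk, chooses such partition points to optimize the resulting relaxation bound rather than placing them uniformly.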