Session: Recent advances in algorithms for large-scale optimization (I)
Chair: Xudong Li
Cluster: Computational Software
Talk 1: Infeasible Model Analysis in the OptVerse Solver
Speaker: Zirui Zhou
Abstract: Isolating an Irreducible Infeasible Subset (IIS) of constraints is the best way to analyze an infeasible optimization model. The OptVerse solver incorporates very fast algorithms for this purpose. The LP analyzer takes advantage of the presolver to isolate a small subset of constraints for conventional analysis, whether or not presolve detects the infeasibility. The MIP analyzer uses new techniques that very quickly find an IIS including the integrality restrictions. Experimental results are given.
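As background, here is a minimal sketch of the classical deletion filter, one standard way to reduce an infeasible constraint set to an IIS. It is illustrative only and is not the OptVerse algorithm described above; the `is_feasible` oracle is a hypothetical stand-in for a call into an LP solver.

```python
def deletion_filter(constraints, is_feasible):
    """Return an irreducible infeasible subset (IIS) of `constraints`.

    Precondition: the full list `constraints` is infeasible, and
    `is_feasible` is an exact feasibility oracle (hypothetical here;
    in practice a call into an LP solver).
    """
    iis = list(constraints)
    i = 0
    while i < len(iis):
        candidate = iis[:i] + iis[i + 1:]  # tentatively drop constraint i
        if not is_feasible(candidate):
            iis = candidate                # still infeasible: drop for good
        else:
            i += 1                         # needed for infeasibility: keep
    return iis                             # every remaining constraint is necessary
```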
Talk 2: HOT: An Efficient Halpern Accelerating Algorithm for Optimal Transport Problems
Speaker: Yancheng Yuan
Abstract: This talk introduces HOT, an efficient algorithm for solving optimal transport (OT) problems with finite supports. We focus in particular on an efficient implementation for the case where the supports lie in $\mathbb{R}^2$ and the ground distances are given by the squared Euclidean ($L_2^2$) norm. Specifically, we design a Halpern-accelerated algorithm to solve an equivalent reduced model of the discrete OT problem. Moreover, we derive a novel procedure that solves the linear systems arising in the HOT algorithm in linear time. Consequently, we can obtain an $\varepsilon$-approximate solution to an OT problem with $M$ supports in $O(M^{1.5}/\varepsilon)$ flops, which significantly improves on the best-known computational complexity. We further propose an efficient procedure to recover an optimal transport plan for the original OT problem from a solution to the reduced model, thereby overcoming the limitation of the reduced model in applications that require the transport map. We implement the HOT algorithm in PyTorch, and extensive numerical results show its superior performance compared to existing state-of-the-art algorithms for solving OT problems.
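To make the acceleration concrete, here is a minimal sketch of the underlying Halpern iteration with the standard anchor weights $\lambda_k = 1/(k+2)$, applied to a toy nonexpansive map. The fixed-point operator of the actual HOT algorithm (built from the reduced OT model and its linear-time linear-system solves) is not reproduced here; `T` below is a placeholder.

```python
import numpy as np

def halpern(T, x0, num_iters=1000):
    """Halpern iteration: x_{k+1} = lam_k * x0 + (1 - lam_k) * T(x_k),
    with anchor weights lam_k = 1/(k+2)."""
    x = x0
    for k in range(num_iters):
        lam = 1.0 / (k + 2)
        x = lam * x0 + (1.0 - lam) * T(x)
    return x

# Toy usage: T is a nonexpansive gradient-step map whose fixed point
# solves A x = b; purely illustrative, not the HOT operator.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
T = lambda x: x - 0.2 * (A @ x - b)  # gradient step for min 0.5 x'Ax - b'x
x_star = halpern(T, np.zeros(2), num_iters=5000)  # approximates A^{-1} b
```

The anchoring term is what distinguishes Halpern schemes from plain fixed-point iteration; for nonexpansive operators it yields an $O(1/k)$ rate on the fixed-point residual, consistent with the $O(M^{1.5}/\varepsilon)$ flop count above.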
Talk 3: DNNLasso: Scalable Graph Learning for Matrix-Variate Data
Speaker: Meixia Lin
Abstract: We consider the problem of jointly learning the row-wise and column-wise dependencies of matrix-variate observations, which are modelled separately by two precision matrices. Due to the complicated structure of the Kronecker-product precision matrices in the commonly used matrix-variate Gaussian graphical models, a sparser Kronecker-sum structure, based on the Cartesian product of graphs, was recently proposed. However, existing methods for estimating Kronecker-sum structured precision matrices do not scale well to large-scale datasets. In this work, we introduce DNNLasso, a diagonally non-negative graphical lasso model for estimating the Kronecker-sum structured precision matrix, which outperforms state-of-the-art methods by a large margin in both accuracy and computational time.
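For concreteness, a brief sketch of the Kronecker-sum structure itself: under one common vectorization convention, a $p \times q$ matrix-variate observation has joint precision matrix $\Theta \oplus \Psi = \Theta \otimes I_q + I_p \otimes \Psi$, which corresponds to the Cartesian product of the row and column graphs. The snippet below only forms this matrix; DNNLasso's estimation algorithm is not reproduced here.

```python
import numpy as np

def kronecker_sum(Theta, Psi):
    """Kronecker sum of row precision Theta (p x p) and
    column precision Psi (q x q): Theta ⊗ I_q + I_p ⊗ Psi."""
    p, q = Theta.shape[0], Psi.shape[0]
    return np.kron(Theta, np.eye(q)) + np.kron(np.eye(p), Psi)

# Toy example with p = 2 rows and q = 3 columns.
Theta = np.array([[2.0, -0.5],
                  [-0.5, 1.5]])                             # row-wise dependencies
Psi = np.eye(3) + 0.3 * (np.eye(3, k=1) + np.eye(3, k=-1))  # column-wise (tridiagonal)
Omega = kronecker_sum(Theta, Psi)  # (p*q) x (p*q), sparser than Theta ⊗ Psi
```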