Session: Theoretical and computational advances in nonconvex optimization via convex relaxations and distributionally robust optimization
Chair: E. Alper Yildirim
Cluster: Global Optimization
Talk 1: Semidefinite liftings for the complex cut polytope
Speaker: Lennart Sinjorgo
Abstract: We consider the complex cut polytope: the convex hull of Hermitian rank-one matrices xx*, where the elements of the n-dimensional vector x are complex m-th roots of unity. These polytopes find applications in MAX-3-CUT, digital communication, and, more generally, complex quadratic programming. For m = 2, the complex cut polytope corresponds to the well-known real cut polytope. We provide an exact description of the complex cut polytope for m = n = 3 and investigate second-order semidefinite liftings of the complex cut polytope. For such second-order liftings, we present a method for reducing the size of the matrix without weakening the approximation. We support our theoretical findings with numerical experiments.
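The vertices of the complex cut polytope can be written down directly from the definition in the abstract. Below is a minimal numpy sketch (my own illustration, not the speaker's code) that builds one vertex xx* for m = n = 3 and checks the defining properties: the matrix is Hermitian, rank one, and has unit diagonal.

```python
import numpy as np

# A vertex of the complex cut polytope for m = n = 3:
# entries of x are complex m-th roots of unity, and the vertex is X = x x*.
m, n = 3, 3
roots = np.exp(2j * np.pi * np.arange(m) / m)  # the m-th roots of unity

x = roots[np.array([0, 1, 2])]   # one particular choice of roots for each entry
X = np.outer(x, x.conj())        # Hermitian rank-one matrix x x*

assert np.allclose(X, X.conj().T)            # Hermitian
assert np.linalg.matrix_rank(X) == 1         # rank one
assert np.allclose(np.diag(X), 1.0)          # unit diagonal
```

For m = 2 the roots of unity are ±1, and the same construction yields the vertices of the real cut polytope.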
Talk 2: Novel and Tractable Convex Relaxations of Standard Quadratic Optimization Problems under Cardinality Constraints
Speaker: E. Alper Yildirim
Abstract: Standard quadratic optimization problems (StQPs) provide a versatile modelling tool in a multitude of applications, such as mathematical finance, machine learning (clustering) and modelling in the biosciences (e.g. selection and ecology). In this talk, we consider StQPs under an additional cardinality (sparsity) constraint, which renders the problem NP-hard even for convex objectives. One motivation for studying StQPs under such sparsity restrictions is the high-dimensional portfolio selection problem with too many assets to handle, in particular in the presence of transaction costs. We present novel computational approaches to this relevant but difficult problem, involving modern conic optimization techniques along with significant dimensional reduction, which is essential for the tractability of these methods as the problem size grows. In addition, we propose a particular instance generation procedure that systematically avoids overly easy instances. We present extensive computational results demonstrating the versatility and strength of the proposed relaxations.
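To make the problem concrete: a cardinality-constrained StQP minimizes x'Qx over the standard simplex subject to x having at most k nonzero entries. The sketch below (my own toy illustration, assuming scipy is available; it is a brute-force baseline, not the conic relaxations of the talk) enumerates all supports of size k and solves each restricted StQP numerically, which is only feasible for very small n.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def sparse_stqp(Q, k):
    """Brute-force sketch: minimize x' Q x over the standard simplex with at
    most k nonzeros, by enumerating all size-k supports and solving each
    restricted StQP with a local NLP solver."""
    n = Q.shape[0]
    best_val, best_x = np.inf, None
    for S in itertools.combinations(range(n), k):
        QS = Q[np.ix_(S, S)]                  # restrict Q to the support S
        x0 = np.full(k, 1.0 / k)              # start at the simplex barycentre
        res = minimize(lambda x: x @ QS @ x, x0,
                       method="SLSQP",
                       bounds=[(0.0, 1.0)] * k,
                       constraints=[{"type": "eq",
                                     "fun": lambda x: x.sum() - 1.0}])
        if res.fun < best_val:
            best_val = res.fun
            best_x = np.zeros(n)
            best_x[list(S)] = res.x
    return best_val, best_x

# Example: for Q = diag(1, 2, 3) and k = 2, the optimum is attained on the
# support {0, 1} with x = (2/3, 1/3) and value 2/3.
Q = np.diag([1.0, 2.0, 3.0])
val, xopt = sparse_stqp(Q, 2)
```

The number of supports grows combinatorially in n, which is precisely why the talk's conic relaxations and dimensional reduction are needed at realistic portfolio sizes.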
Talk 3: Distributionally robust standard quadratic optimization with Wasserstein ambiguity
Speaker: Daniel de Vicente
Abstract: The standard quadratic optimization problem (StQP) consists of minimizing a quadratic form over the standard simplex. If the quadratic form is neither convex nor concave, the StQP is NP-hard. This problem has many interesting applications, ranging from portfolio optimization to machine learning. Sometimes the data matrix is uncertain, but some information about its distribution can be inferred, e.g. the first two moments or else a reference distribution (typically, the empirical distribution after sampling). In distributionally robust optimization, the goal is to optimize against the worst case over all distributions in an ambiguity set defined by such distributional information. We will explore two versions: the distributionally robust stochastic StQP, focussing on expectations, and the distributionally robust chance-constrained StQP, both with an ambiguity set based upon a bound on the Wasserstein distance to the sampled distribution.
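One reason Wasserstein ambiguity is tractable for the stochastic StQP: for fixed x, the objective f(Q) = x'Qx = <Q, xx'> is linear in the uncertain matrix Q, and by Kantorovich duality the worst-case expectation over a type-1 Wasserstein ball of radius eps around the empirical distribution (with Frobenius-norm ground metric and unbounded support) is the empirical mean plus eps times the Lipschitz constant ||xx'||_F. The sketch below (my own illustration; function names and the choice of ground metric are assumptions, not the speaker's formulation) computes this closed form.

```python
import numpy as np

def worst_case_expectation(Q_samples, x, eps):
    """Worst case of E[x' Q x] over a type-1 Wasserstein ball of radius eps
    around the empirical distribution of Q_samples, using the closed form
    for a function linear in Q: empirical mean + eps * ||x x'||_F."""
    emp = np.mean([x @ Q @ x for Q in Q_samples])   # empirical expectation
    lip = np.linalg.norm(np.outer(x, x))            # Frobenius norm of x x'
    return emp + eps * lip

# Example with two sampled data matrices and a fixed simplex point x:
x = np.array([0.5, 0.5])
Q_samples = [np.eye(2), 2.0 * np.eye(2)]
wc = worst_case_expectation(Q_samples, x, eps=0.1)
```

Setting eps = 0 recovers the sample-average objective; growing eps adds a robustness margin proportional to ||xx'||_F, which the distributionally robust StQP then minimizes over the simplex.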