Session: Optimization on Riemannian manifolds and stratified sets (1/2)
Chair: Guillaume Olikier
Cluster: Optimization on Manifolds
Talk 1: First-order optimization on stratified sets
Speaker: Guillaume Olikier
Abstract: This talk considers the problem of minimizing a differentiable function with locally Lipschitz continuous gradient on a closed subset of a Euclidean vector space that can be partitioned into finitely many smooth submanifolds. The partition is called a stratification and the submanifolds are called the strata. Under suitable assumptions on the stratification, first-order methods are proposed that generate a sequence in the set whose accumulation points are guaranteed to be Bouligand stationary. These methods combine retracted line searches along descent directions selected in the Bouligand tangent cone and projections onto the strata. Examples of sets satisfying the assumptions include the algebraic variety of real matrices of upper-bounded rank and its intersection with the cone of symmetric positive-semidefinite matrices. On these sets, Bouligand stationarity is the strongest necessary condition for local optimality.
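As a rough illustration of this class of methods (a simplified sketch, not the speaker's exact algorithm), the snippet below runs projected gradient descent with a backtracking line search on the bounded-rank variety mentioned in the abstract; the metric projection onto the set of matrices of rank at most r is a truncated SVD. Function names, step-size constants, and the test problem are all illustrative assumptions.

```python
import numpy as np

def project_rank(X, r):
    """Metric projection onto {X : rank(X) <= r} via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def projected_gradient(f, grad, X0, r, alpha0=1.0, beta=0.5, tol=1e-8, max_iter=500):
    """Projected gradient descent with backtracking on the bounded-rank
    variety; a simplified stand-in for the retracted line searches
    described in the abstract."""
    X = project_rank(X0, r)
    for _ in range(max_iter):
        G = grad(X)
        alpha = alpha0
        X_new = project_rank(X - alpha * G, r)
        # Backtrack until a simple sufficient-decrease test holds.
        while f(X_new) > f(X) - 1e-4 * alpha * np.linalg.norm(G) ** 2:
            alpha *= beta
            if alpha < 1e-12:
                return X  # no acceptable step found
            X_new = project_rank(X - alpha * G, r)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

# Example: best rank-2 approximation of a random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
X_opt = projected_gradient(lambda X: 0.5 * np.linalg.norm(X - A) ** 2,
                           lambda X: X - A, np.zeros_like(A), r=2)
```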
Talk 2: The Effect of Smooth Parametrizations on Nonconvex Optimization Landscapes
Speaker: Joe Kileel
Abstract: Given a constrained optimization problem, we often tackle it by choosing a parametrization of the constraint set and then optimizing over the parameters. This lets us approach optimization problems over manifolds or stratified sets through problems over simpler sets, such as Euclidean space without constraints. For example, if we need to optimize a real-valued cost over bounded-rank matrices, then we can parametrize the domain using a low-rank factorization (e.g., the SVD) and then optimize over the factors, which are unconstrained. Alternatively, in deep learning, when optimizing over the function space represented by a neural network, we have parametrized the space by the weights and biases of the network. In these situations, a natural question is: does the choice of parametrization affect the nonconvex optimization landscape? And: are some parametrizations “better” than others? In this talk, I'll present a geometric framework to formalize such questions and new analysis tools to help answer them. The theory will be applied to several examples, including the aforementioned ones as well as optimization problems from tensor decomposition and semidefinite programming. Based on joint work with Eitan Levin (Caltech), Nicolas Boumal (EPFL), and Chris Criscitiello (EPFL).
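To make the bounded-rank example concrete, here is a minimal sketch (illustrative only, not the talk's framework): the constraint set {X : rank(X) <= r} is parametrized by factors X = L R^T, and plain gradient descent runs over the unconstrained factors. The sizes, step size, and target matrix are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 15, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target

def f(X):
    """Cost on the constraint set {X : rank(X) <= r}."""
    return 0.5 * np.linalg.norm(X - A) ** 2

# Parametrize the set by X = L @ R.T and optimize over the factors,
# which live in plain Euclidean space with no constraints.
L = rng.standard_normal((m, r))
R = rng.standard_normal((n, r))
step = 0.01
for _ in range(2000):
    E = L @ R.T - A                      # residual = gradient of f at L @ R.T
    L, R = L - step * (E @ R), R - step * (E.T @ L)

print(f(L @ R.T))  # should be small: this instance's factored landscape is benign
```

Whether the factored landscape g(L, R) = f(L R^T) inherits or loses critical points relative to f on the original constraint set is exactly the kind of question the talk's framework is designed to answer.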
Talk 3: Geodesic convexity of polar decomposition
Speaker: Foivos Alimisis
Abstract: In this talk, we will analyze a hidden convexity-like structure for the polar decomposition problem in the orthogonal group. This turns out to be similar to convexity-like structures that have been discovered for the symmetric eigenvalue problem. We will discuss the theoretical as well as the practical implications of this structure for polar decomposition problems under uncertainty.
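For background (a sketch of the standard construction, not the talk's analysis): the polar decomposition A = Q P writes a square matrix A as an orthogonal factor Q times a symmetric positive-semidefinite factor P, and Q is the minimizer of ||A - Q||_F over the orthogonal group, computable in closed form from the SVD. The example matrix below is an illustrative assumption.

```python
import numpy as np

def polar_factor(A):
    """Orthogonal polar factor of a square matrix A: from the SVD
    A = U @ diag(s) @ Vt, the factor Q = U @ Vt minimizes the
    Frobenius distance ||A - Q||_F over the orthogonal group."""
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
Q = polar_factor(A)
P = Q.T @ A                                # symmetric PSD factor, A = Q @ P
print(np.allclose(Q @ Q.T, np.eye(5)),     # Q is orthogonal
      np.allclose(P, P.T))                 # P is symmetric
```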