Session: Manifolds, samples, and learning
Chair: Ralf Zimmermann
Cluster: Optimization on Manifolds
Talk 1: Dynamic Subspace Estimation using Grassmannian Geodesics
Speaker: Laura Balzano
Abstract: In this work, we consider recovering a sequence of low-rank matrices from noisy and possibly undersampled measurements, where an underlying subspace varies across samples over time. We propose a Riemannian block majorize-minimization algorithm that constrains the time-varying subspaces to lie along a geodesic on the Grassmann manifold. Our proposed method can faithfully estimate the subspaces at each time point, even when the number of samples at each time point is smaller than the rank of the subspace. We demonstrate the effectiveness of our algorithm on synthetic data, dynamic fMRI data, video data, and graph data with evolving communities.
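The key modeling choice above is to constrain the evolving subspaces to a single geodesic on the Grassmannian. The snippet below is a minimal numpy sketch, not the speakers' code, of how such a geodesic can be evaluated at different time points using the classical closed-form expression of Edelman, Arias, and Smith; all names and dimensions are chosen only for illustration.

```python
import numpy as np

def grassmann_geodesic(Y0, H, t):
    """Point at time t on the Grassmann geodesic through the orthonormal
    basis Y0 (n x k) with tangent direction H (n x k, satisfying Y0^T H = 0),
    via the Edelman-Arias-Smith formula Y(t) = Y0 V cos(S t) V^T + U sin(S t) V^T.
    Hypothetical helper for illustration, not the algorithm from the talk."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)   # thin SVD of the tangent
    cos_t = np.diag(np.cos(s * t))
    sin_t = np.diag(np.sin(s * t))
    Yt = Y0 @ Vt.T @ cos_t @ Vt + U @ sin_t @ Vt
    Q, _ = np.linalg.qr(Yt)                            # re-orthonormalize against drift
    return Q

# Toy usage: a rank-3 subspace of R^20 drifting along one geodesic.
rng = np.random.default_rng(0)
Y0, _ = np.linalg.qr(rng.standard_normal((20, 3)))
H = rng.standard_normal((20, 3))
H -= Y0 @ (Y0.T @ H)            # project onto the tangent (horizontal) space at Y0
subspaces = [grassmann_geodesic(Y0, H, t) for t in np.linspace(0.0, 1.0, 5)]
```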
Talk 2: Latent Space Manifold Geometry for Robust Image Classification
Speaker: Shenyuan Liang
Abstract: This talk will explore Latent Space Manifold Learning (LSML) — the process of learning image manifolds in latent spaces — for robustifying image classification and improving image denoising. Leveraging a geometry-preserving GAN~\cite{liang2024learning}, we find a mapping $\Phi$ that embeds multi-class training images into a low-dimensional latent space, and estimate class-specific manifolds in that latent space. This $\Phi$ is designed to be isometric on image manifolds and naturally suppresses extraneous dimensions. We use the discovered latent space to perform k-NN classification of test images and investigate image denoising. The latter involves forward mapping a noisy image into the latent space through $\Phi$ and reconstructing it back through the pseudo-inverse $\Phi^{-1}$. We assess this LSML framework under extensive corruption scenarios, including additive noise, blur, patch occlusions, and adversarial distortions. Unlike standard classifiers, whose performance degrades substantially even under low noise, the LSML-based approach remains robust under severe perturbations. We provide theoretical insights into our performance gains using the noise-suppressing properties of $\Phi$. As further evidence, we modify a state-of-the-art classifier (ResNet) by incorporating a geometry-preserving loss term and observe improved robustness. Code and data will be made publicly available after review.
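To make the pipeline shape concrete, here is a minimal sketch assuming a fixed random linear map as a stand-in for the learned embedding $\Phi$ (in the talk, $\Phi$ is a GAN-based, geometry-preserving encoder) and scikit-learn's KNeighborsClassifier for the latent-space k-NN step; all shapes, names, and data are hypothetical.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the learned geometry-preserving embedding Phi: a fixed random
# linear map from flattened 32x32 images to a 64-dimensional latent space.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32 * 32))

def phi(images):
    """Map flattened images to the latent space (assumed linear embedding)."""
    return images.reshape(len(images), -1) @ W.T

def phi_pinv(latents):
    """Reconstruct images from latents via the pseudo-inverse of W; with this
    linear stand-in, the round trip is a projection onto the row space of W."""
    return latents @ np.linalg.pinv(W).T

# Toy data: two "classes" of 32x32 images and noisy copies of a few of them.
X_train = rng.standard_normal((200, 32, 32))
y_train = rng.integers(0, 2, size=200)
X_test = X_train[:10] + 0.5 * rng.standard_normal((10, 32, 32))

# k-NN classification carried out in the latent space.
clf = KNeighborsClassifier(n_neighbors=5).fit(phi(X_train), y_train)
pred = clf.predict(phi(X_test))

# Denoising pipeline shape: forward map, then reconstruct back to image space.
denoised = phi_pinv(phi(X_test)).reshape(-1, 32, 32)
```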
Talk 3: NP-hardness of Grassmannian optimization
Speaker: Zehua Lai
Abstract: We show that unconstrained quadratic optimization over a Grassmannian is NP-hard. Our results cover all scenarios: (i) when $k$ and $n$ are both allowed to grow; (ii) when $k$ is arbitrary but fixed; (iii) when $k$ is fixed at its lowest possible value, $1$. We then deduce the NP-hardness of unconstrained cubic optimization over the Stiefel manifold and the orthogonal group. As an addendum, we demonstrate the NP-hardness of unconstrained quadratic optimization over the Cartan manifold, i.e., the positive definite cone regarded as a Riemannian manifold, another popular example in manifold optimization. We will also establish the nonexistence of a fully polynomial-time approximation scheme (FPTAS) in all cases.
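For concreteness, one standard way to write a quadratic program over the Grassmannian, shown here purely as an illustrative formulation rather than the speaker's exact setup, identifies $\operatorname{Gr}(k,n)$ with the set of rank-$k$ orthogonal projection matrices; "unconstrained" then means the quadratic objective is arbitrary and the only constraints are those defining the manifold itself:
\[
\min_{P \in \mathbb{R}^{n \times n}} \;\; \tfrac{1}{2}\,\operatorname{vec}(P)^{\mathsf{T}} A\, \operatorname{vec}(P) + b^{\mathsf{T}} \operatorname{vec}(P)
\quad \text{subject to} \quad P^{\mathsf{T}} = P,\; P^{2} = P,\; \operatorname{tr}(P) = k,
\]
with $A \in \mathbb{R}^{n^2 \times n^2}$ symmetric and $b \in \mathbb{R}^{n^2}$ arbitrary.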