Session: The Carlsson Convergence
Chair: John Carlsson

Talk 1: Computing the alpha complex using dual active set quadratic programming
Speaker: Erik Carlsson
Abstract: We present a dual active set quadratic programming approach for computing the alpha complex without first computing the full Delaunay triangulation, which becomes intractable in high dimensions. Our method formulates each simplex inclusion as a constrained quadratic optimization problem and uses Lagrangian duality to efficiently rule out simplices early when dual feasible points exceed the radius threshold. The dual approach provides natural termination certificates for infeasible simplices. This "rule-out" strategy is particularly effective for high-dimensional sparse alpha complexes, enabling computation on high-dimensional datasets where traditional Delaunay-based methods fail. We demonstrate applications to molecular conformations, image patches, and topological data analysis, showing significant computational advantages over existing methods in high-dimensional settings.
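The radius test behind the simplex-inclusion subproblem can be illustrated with a minimal sketch. This is not the authors' dual active set solver: it assumes the circumradius as the filtration value, uses half the longest edge as a cheap lower bound in the spirit of the dual rule-out certificates, and omits the Delaunay emptiness condition entirely.

```python
import numpy as np

def circumradius(P):
    """Circumradius of the simplex whose vertices are the rows of P."""
    A = P[1:] - P[0]                      # edge vectors from the base vertex
    b = np.einsum("ij,ij->i", A, A)       # squared edge lengths
    x = np.linalg.solve(2 * A @ A.T, b)   # circumcenter in the edge basis
    return float(np.linalg.norm(A.T @ x))

def maybe_in_alpha_complex(P, alpha):
    """Cheap lower-bound rule-out, then the exact radius test.

    Half the longest edge is a lower bound on the circumradius, so a
    simplex whose longest edge exceeds 2*alpha can be discarded without
    solving the linear system -- a stand-in for the dual feasibility
    certificates described in the abstract.
    """
    diffs = P[:, None, :] - P[None, :, :]
    longest_edge = np.sqrt((diffs ** 2).sum(-1)).max()
    if longest_edge / 2 > alpha:          # rule-out step: no solve needed
        return False
    return circumradius(P) <= alpha

# equilateral triangle with side 1: circumradius = 1/sqrt(3) ~ 0.577
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
print(maybe_in_alpha_complex(tri, 0.6))   # True
print(maybe_in_alpha_complex(tri, 0.5))   # False
```

The point of the rule-out step is that it costs only pairwise distances, so in high dimensions most candidate simplices never reach the linear solve.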

Talk 2: Topological principles for neural network design and optimization
Speaker: Gunnar Carlsson
Abstract: We present a unified framework that leverages topological and geometric principles to both construct and optimize neural network architectures. Our approach addresses two fundamental aspects of deep learning: the design of network structures that respect the intrinsic geometry of feature spaces, and the characterization of weight space topology that governs optimization dynamics. We demonstrate that neural networks can be systematically constructed using metric and graph-based information from data manifolds, while simultaneously showing that the topological complexity of the resulting weight spaces directly influences both convergence rates and generalization performance. Through theoretical analysis and empirical validation, we establish that networks designed with geometric awareness of their feature spaces naturally exhibit simpler topological structures in their weight spaces, leading to superior optimization behavior and enhanced generalization across diverse datasets. This work provides a mathematical foundation for understanding the deep connection between data geometry, network architecture, and learning dynamics, offering principled guidelines for designing more effective neural network systems.
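One concrete reading of "constructed using metric and graph-based information from data manifolds" is to let a neighborhood graph on the data dictate which connections a layer keeps. The sketch below is a hypothetical illustration of that idea, not the speaker's construction: a `knn_mask` helper (name invented here) builds the k-nearest-neighbour graph and a sparse weight matrix supported only on its edges.

```python
import numpy as np

def knn_mask(X, k):
    """Boolean adjacency mask of the k-nearest-neighbour graph of X."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)           # exclude self-edges
    nbrs = np.argsort(D, axis=1)[:, :k]   # indices of the k closest points
    mask = np.zeros(D.shape, dtype=bool)
    mask[np.repeat(np.arange(len(X)), k), nbrs.ravel()] = True
    return mask

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))               # toy samples from a feature space
mask = knn_mask(X, k=2)
W = rng.normal(size=mask.shape) * mask    # weights live only on graph edges
```

Restricting the weight support this way is one simple mechanism by which data geometry could constrain architecture; the abstract's claim about the resulting weight-space topology is a separate theoretical point.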

Talk 3: Topology and local optima in computer vision
Speaker: John Carlsson
Abstract: We present a topological approach to analyzing the non-convex optimization landscape of image correspondence problems in stereo vision. The correspondence problem minimizes a cost function measuring pixel intensity differences but suffers from multiple local minima caused by repeated patterns and occlusions. Rather than smoothing away local optima, we introduce a filtered simplicial complex and use persistent homology to classify them: nontrivial homology classes correspond to sources of non-convexity in the cost function. Long bars in persistence diagrams indicate robust correspondence solutions representing global or near-global optima, while short bars correspond to spurious local minima. We demonstrate on practical image pairs how topological features correlate with optimization challenges, providing both theoretical understanding of the landscape structure and practical guidance for initialization and convergence assessment.
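The long-bar/short-bar distinction can be seen in a toy stand-in for the abstract's filtered complex: 0-dimensional sublevel-set persistence of a 1-D cost profile, computed with union-find and the elder rule (a shallower basin dies when it merges into a deeper one). This is an illustrative sketch, not the talk's construction on image data.

```python
def sublevel_persistence_1d(f):
    """0-dim sublevel-set persistence of samples f on a 1-D grid.

    Each bar (birth, death) is a local minimum of the profile; death is
    the cost value at which its basin merges into a deeper one (elder
    rule). The global minimum gets death = inf. Long bars are robust
    minima; short bars are spurious ones.
    """
    n = len(f)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    bars, comp_min, active = [], {}, [False] * n
    for i in sorted(range(n), key=lambda i: f[i]):  # sweep by cost value
        active[i] = True
        comp_min[i] = f[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    if comp_min[ri] > comp_min[rj]:
                        ri, rj = rj, ri          # rj is the younger basin
                    if comp_min[rj] < f[i]:      # skip zero-length pairs
                        bars.append((comp_min[rj], f[i]))
                    parent[rj] = ri
    bars.append((min(f), float("inf")))          # global minimum never dies
    return sorted(bars)

# three basins: minima at 0.0 and 0.5 merge into the deep one at -1.0
bars = sublevel_persistence_1d([0.0, 2.0, 0.5, 2.0, -1.0, 3.0])
print(bars)   # [(-1.0, inf), (0.0, 2.0), (0.5, 2.0)]
```

Here the infinite bar is the global optimum, and the two finite bars measure how much the cost must rise before each local minimum's basin is absorbed, which is the robustness criterion the abstract attaches to bar length.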
Speakers

Gunnar Carlsson
BluelightAI; Stanford emeritus

John Carlsson
University of Southern California
Monday July 21, 2025 4:15pm - 5:30pm PDT
Joseph Medicine Crow Center for International and Public Affairs (DMC) 200, 3518 Trousdale Pkwy, Los Angeles, CA 90089
