Monday July 21, 2025 1:15pm - 2:30pm PDT
Session: Techniques of PDE Optimization in Machine Learning
Chair: Anton Schiela
Cluster: PDE-constrained Optimization

Talk 1: SA-NODEs and the Universal Approximation of Dynamical Systems
Speaker: Lorenzo Liverani
Abstract: In this talk, I will introduce the framework of semi-autonomous neural ordinary differential equations (SA-NODEs), a variation of vanilla NODEs that employs fewer parameters. This is achieved by making the coefficients of the SA-NODEs independent of time. Despite this apparent simplification, I will demonstrate that SA-NODEs retain all the strong approximation properties of vanilla NODEs, both from a theoretical and a numerical perspective. Specifically, SA-NODEs are able to learn the global flow of a dynamical system and track the entire trajectory over a finite (but arbitrary) time horizon. I will conclude the talk by presenting several numerical experiments showing that SA-NODEs perform well for various systems and significantly outperform vanilla NODEs. This is joint work with Z. Li, K. Liu, and E. Zuazua.
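
To make the contrast concrete, here is a minimal sketch assuming a plain PyTorch setup with an explicit-Euler rollout; the class names and the exact form of the vector fields are illustrative assumptions, not the architectures from the talk. The point is only that the vanilla field receives the time t as an extra input, while the SA-NODE field has time-independent coefficients.

# Minimal sketch (illustrative, not the talk's exact architectures): a vanilla
# NODE vector field takes time t as an extra input feature, while an SA-NODE
# field uses a single set of time-independent weights.
import torch
import torch.nn as nn

class VanillaNODEField(nn.Module):
    # dx/dt = f_theta(t, x): time dependence emulated by feeding t as a feature.
    def __init__(self, dim, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, width), nn.Tanh(),
                                 nn.Linear(width, dim))

    def forward(self, t, x):
        t_col = torch.full((x.shape[0], 1), float(t))
        return self.net(torch.cat([x, t_col], dim=1))

class SANODEField(nn.Module):
    # dx/dt = f_theta(x): the coefficients do not depend on time.
    def __init__(self, dim, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, width), nn.Tanh(),
                                 nn.Linear(width, dim))

    def forward(self, t, x):  # t accepted for a uniform interface, but unused
        return self.net(x)

def rollout(field, x0, t0=0.0, t1=1.0, steps=100):
    # Explicit-Euler integration that records the whole trajectory, matching
    # the goal of tracking the entire flow rather than only the endpoint.
    x, dt, traj = x0, (t1 - t0) / steps, [x0]
    for k in range(steps):
        x = x + dt * field(t0 + k * dt, x)
        traj.append(x)
    return torch.stack(traj)  # shape: (steps + 1, batch, dim)

traj = rollout(SANODEField(dim=2), torch.randn(16, 2))

Training would fit such a rollout to observed trajectories; at equal width, the SA-NODE field carries fewer effective degrees of freedom because none of its weights vary with t.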

Talk 2: Preconditioned Gradient Methods for Optimizing Neural Networks with Hilbert Space Layers
Speaker: Frederik Koehne
Abstract: Optimization problems in machine learning typically involve optimization variables that are operators between Hilbert spaces. In gradient-based methods, selecting an appropriate inner product on this space of linear operators is fundamental to obtaining meaningful search directions. We review the natural inner product on the space of Hilbert-Schmidt operators and demonstrate its efficient application in computing gradients for the transition matrices in artificial neural networks. This approach ensures that structural information from the network layers is incorporated into the optimization updates. We present the theoretical foundations, discretization details, and numerical results confirming that the solutions obtained retain the expected structural properties.
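
As a small illustration of this kind of preconditioning, here is a NumPy sketch under the assumption (mine, not necessarily the talk's setup) that the discretized input and output inner products are represented by symmetric positive-definite mass matrices M_in and M_out. The Hilbert-Schmidt gradient is then the Riesz representative of the Euclidean gradient pairing with respect to the operator inner product <A, B> = tr(M_in^{-1} A^T M_out B).

# Sketch: turning a Euclidean matrix gradient into a Hilbert-Schmidt gradient.
# Assumes the layer W maps (R^n, M_in) -> (R^m, M_out), where M_in and M_out
# are SPD mass matrices discretizing the Hilbert-space inner products.
# Notation and setup are my assumptions, not taken from the talk.
import numpy as np

def hs_gradient(G, M_in, M_out):
    # Riesz representative of dL = tr(G^T dW) with respect to the
    # Hilbert-Schmidt inner product <A, B> = tr(M_in^{-1} A^T M_out B),
    # which works out to grad = M_out^{-1} G M_in.
    return np.linalg.solve(M_out, G) @ M_in

# Tiny check that <grad, dW>_HS reproduces the Euclidean pairing tr(G^T dW).
rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n)); M_in = A @ A.T + n * np.eye(n)
B = rng.standard_normal((m, m)); M_out = B @ B.T + m * np.eye(m)
G, dW = rng.standard_normal((m, n)), rng.standard_normal((m, n))
grad = hs_gradient(G, M_in, M_out)
lhs = np.trace(np.linalg.solve(M_in, grad.T) @ M_out @ dW)
rhs = np.trace(G.T @ dW)
assert np.isclose(lhs, rhs)

In a training loop, one would apply hs_gradient to the backpropagated matrix gradient before the update, so that the search direction respects the layer's Hilbert-space geometry rather than the raw Euclidean one.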

Talk 3: ProxSTORM: A Stochastic Trust Region Algorithm for Nonsmooth Optimization
Speaker: Aurya Javeed
Abstract: This talk is about minimizing the sum of a smooth term and a convex nonsmooth term. We present a stochastic proximal Newton trust-region algorithm that assumes models and estimates of the objective are sufficiently accurate, sufficiently often. Like STORM (STochastic Optimization with Random Models), we use martingale arguments to prove that our algorithm is globally convergent with probability one.
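
To fix ideas, the following is a schematic of the trust-region mechanics for min f(x) + phi(x) with phi = lam * ||x||_1. It is a simplified first-order sketch of my own, not the proximal Newton method from the talk; f_est and grad_est stand in for the stochastic estimates and models whose accuracy assumptions ("sufficiently accurate, sufficiently often") drive the martingale-based convergence argument.

# Schematic stochastic proximal trust-region loop for min f(x) + lam*||x||_1.
# A simplified gradient-based sketch, not the authors' algorithm: take a
# proximal step, cap it at the trust-region radius, and accept or reject
# based on an estimated decrease ratio.
import numpy as np

def prox_l1(x, t, lam):
    # Proximal operator of t * lam * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

def proxstorm_sketch(f_est, grad_est, x, lam, delta=1.0, iters=100,
                     eta=0.1, gamma=2.0):
    for _ in range(iters):
        g = grad_est(x)                          # stochastic gradient model
        step = prox_l1(x - delta * g, delta, lam) - x
        nrm = np.linalg.norm(step)
        if nrm > delta:                          # stay inside trust region
            step *= delta / nrm
        cand = x + step
        phi_drop = lam * (np.abs(x).sum() - np.abs(cand).sum())
        pred = -(g @ step) + phi_drop            # model-predicted decrease
        act = (f_est(x) - f_est(cand)) + phi_drop  # estimated actual decrease
        if pred > 0 and act / pred >= eta:
            x, delta = cand, gamma * delta       # accept, expand radius
        else:
            delta /= gamma                       # reject, shrink radius
    return x

# Smoke test with exact (deterministic) oracles standing in for the estimates:
# x = proxstorm_sketch(lambda x: ((x - 1.0) ** 2).sum(),
#                      lambda x: 2.0 * (x - 1.0), np.zeros(5), lam=0.5)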

Speakers
Anton Schiela
Lorenzo Liverani
Frederik Koehne
Aurya Javeed
Location: Taper Hall (THH) 106, 3501 Trousdale Pkwy, Los Angeles, CA 90089
