Wednesday July 23, 2025 1:15pm - 2:30pm PDT
Session: Learning in PDE-based optimization and control
Chair: Michael Hintermüller
Cluster: PDE-constrained Optimization

Talk 1: Efficient Computational Methods for Wasserstein Natural Gradient Descent
Speaker: Levon Nurbekyan
Abstract: Natural gradient descent (NGD) is a preconditioning technique that incorporates the geometry of the forward model space to accelerate gradient-based optimization in inverse and learning problems. One relevant geometry is the optimal transportation (OT), or Wasserstein, geometry, which is useful when recovering or learning probability measures. One of the critical challenges in NGD is the preconditioning cost, which, if handled naively, is particularly taxing for the OT geometry due to the high computational cost of OT distances. In this talk, I'll present an efficient way of performing large-scale NGD with a particular emphasis on the OT geometry. The talk is based on joint work with Yunan Yang (Cornell) and Wanzhou Lei (Brown Grad School).
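
To make the preconditioning concrete, here is a minimal sketch (not the speaker's large-scale method) of Wasserstein NGD for a softmax-parameterized density on a 1D grid, written in JAX. It approximates the W2 metric on densities by the pseudoinverse of a density-weighted graph Laplacian and pulls it back to parameter space; the grid size, step size, and target density are illustrative assumptions.

import jax
import jax.numpy as jnp

n = 50                                        # grid points (illustrative)
x = jnp.linspace(0.0, 1.0, n)
target = jnp.exp(-80.0 * (x - 0.7) ** 2)      # assumed target density
target = target / target.sum()

def rho(theta):                               # probability vector on the grid
    return jax.nn.softmax(theta)

def loss(theta):                              # simple misfit to the target
    return 0.5 * jnp.sum((rho(theta) - target) ** 2)

def w2_metric_pinv(r):
    # Density-weighted graph Laplacian L(rho); the W2 metric on a tangent
    # vector d_rho is  d_rho^T L(rho)^+ d_rho  (potential phi = L^+ d_rho).
    w = 0.5 * (r[:-1] + r[1:]) * (n - 1) ** 2
    L = (jnp.diag(jnp.concatenate([w, jnp.zeros(1)]))
         + jnp.diag(jnp.concatenate([jnp.zeros(1), w]))
         - jnp.diag(w, 1) - jnp.diag(w, -1))
    return jnp.linalg.pinv(L)

theta = jnp.zeros(n)
for _ in range(200):
    g = jax.grad(loss)(theta)
    J = jax.jacfwd(rho)(theta)                 # d rho / d theta
    M = J.T @ w2_metric_pinv(rho(theta)) @ J   # pulled-back W2 metric
    theta = theta - 0.2 * jnp.linalg.solve(M + 1e-8 * jnp.eye(n), g)

The expensive pieces, the metric solve and its pullback, are exactly what the talk's methods aim to make scalable; the dense pseudoinverse here is for clarity only.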

Talk 2: Derivative-informed neural operators for efficient PDE-constrained optimization under uncertainty
Speaker: Dingcheng Luo
Abstract: We consider the use of derivative-informed neural operators (DINOs) as surrogates for PDE-constrained optimization subject to uncertainty in the model parameters. Optimization under uncertainty (OUU) is often orders of magnitude more expensive than its deterministic counterpart due to the need to evaluate statistical/risk measures by stochastic integration, requiring a large number of PDE solves. To address this challenge, we propose a neural operator surrogate for the underlying PDE, which is trained on derivatives of the solution operator. This ensures that the neural operator can be used to accurately approximate the cost function and its gradient with respect to the optimization variable, thereby improving its fitness for OUU tasks. We present some supporting theoretical results and demonstrate the performance of our method through numerical experiments, showcasing how DINOs can be used to solve OUU problems in a sample-efficient manner.
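
As a toy illustration of the training idea (assumptions: a synthetic "solution operator" whose Jacobians stand in for the tangent/adjoint PDE solves used in practice, and a small MLP surrogate), the sketch below adds a Jacobian-mismatch term to the usual output loss, which is the derivative-informed ingredient:

import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
A = jax.random.normal(key, (20, 10)) / jnp.sqrt(10.0)

def forward(m):                               # toy stand-in for u(m)
    return jnp.tanh(A @ m)

ms = jax.random.normal(jax.random.PRNGKey(1), (64, 10))   # parameter samples
us = jax.vmap(forward)(ms)                                # solution samples
Js = jax.vmap(jax.jacfwd(forward))(ms)                    # du/dm samples

def net(params, m):                           # small MLP surrogate
    W1, b1, W2, b2 = params
    return W2 @ jnp.tanh(W1 @ m + b1) + b2

def dino_loss(params):
    pred = jax.vmap(lambda m: net(params, m))(ms)
    predJ = jax.vmap(jax.jacfwd(lambda mm: net(params, mm)))(ms)
    return (jnp.mean((pred - us) ** 2)        # output misfit
            + jnp.mean((predJ - Js) ** 2))    # derivative misfit

k1, k2 = jax.random.split(jax.random.PRNGKey(2))
params = [0.1 * jax.random.normal(k1, (40, 10)), jnp.zeros(40),
          0.1 * jax.random.normal(k2, (20, 40)), jnp.zeros(20)]
for _ in range(500):
    grads = jax.grad(dino_loss)(params)
    params = [p - 0.05 * g for p, g in zip(params, grads)]

Matching derivatives is what makes the surrogate's gradient, and hence gradient-based OUU, trustworthy; in the actual DINO setting the Jacobian data comes from tangent/adjoint PDE solves rather than a closed-form operator.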

Talk 3: A hybrid physics-informed neural network based multiscale solver as a partial differential equation constrained optimization problem
Speaker: Denis Korolev
Abstract: The physics-informed neural network (PINN) approach relies on approximating the solution to a partial differential equation (PDE) using a neural network by solving an associated non-convex and highly nonlinear optimization task. Despite the challenges of such an ansatz, the optimization-based formulation of PINNs provides rich flexibility and holds great promise for unifying various techniques into monolithic computational frameworks. Inspired by the Liquid Composite Molding process for fiber-reinforced composites and its related multiscale fluid flow structure, we present a novel framework for optimizing PINNs constrained by partial differential equations, with applications to multiscale PDE systems. Our hybrid approach approximates the fine-scale PDE using PINNs, producing a PDE residual-based objective subject to a coarse-scale PDE model parameterized by the fine-scale solution. Multiscale modeling techniques introduce feedback mechanisms that yield scale-bridging coupling, resulting in a non-standard PDE-constrained optimization problem. From a discrete standpoint, the formulation represents a hybrid numerical solver that integrates both neural networks and finite elements, for which we present a numerical algorithm. The latter combines the natural gradient descent technique for optimizing PINNs with the adjoint-state method, resulting in a Newton-type optimization update for our hybrid solver. We further demonstrate that our hybrid formulation enhances the overall modeling and can substantially improve the convergence properties of PINNs in the context of materials science applications. The talk is based on joint work with Michael Hintermüller (Weierstrass Institute, Humboldt University of Berlin).
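
For a flavor of the Newton-type update, here is a heavily simplified single-scale sketch: a damped Gauss-Newton step on PINN collocation residuals for -u'' = f with u(0) = u(1) = 0. The fine/coarse coupling, the finite element part, and the adjoint terms from the talk are omitted; the network size, damping, and manufactured solution are assumptions.

import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

f = lambda x: (jnp.pi ** 2) * jnp.sin(jnp.pi * x)   # exact solution sin(pi x)
xs = jnp.linspace(0.0, 1.0, 32)                     # collocation points

def u_nn(params, x):                                # tiny PINN ansatz
    W1, b1, w2 = params
    # the x*(1-x) factor hard-enforces the boundary conditions
    return x * (1.0 - x) * jnp.dot(w2, jnp.tanh(W1 * x + b1))

k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params0 = (0.5 * jax.random.normal(k1, (16,)),
           0.5 * jax.random.normal(k2, (16,)),
           0.5 * jax.random.normal(k3, (16,)))
theta, unravel = ravel_pytree(params0)

def residuals(theta):                               # PDE residual -u'' - f
    params = unravel(theta)
    uxx = jax.vmap(jax.grad(jax.grad(u_nn, argnums=1), argnums=1),
                   in_axes=(None, 0))(params, xs)
    return -uxx - f(xs)

for _ in range(50):
    r = residuals(theta)
    J = jax.jacfwd(residuals)(theta)
    # damped Gauss-Newton step, a Newton-type update closely related to
    # (energy) natural gradient descent for PINN least-squares objectives
    theta = theta - jnp.linalg.solve(J.T @ J + 1e-6 * jnp.eye(theta.size),
                                     J.T @ r)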

Speakers

Levon Nurbekyan

Dingcheng Luo

Denis Korolev
Taper Hall (THH) 118, 3501 Trousdale Pkwy, Los Angeles, CA 90089

