Monday July 21, 2025 4:15pm - 5:30pm PDT
Session: Bilevel Optimization for Inverse Problems Part 1
Chair: Juan Carlos de los Reyes
Cluster: PDE-constrained Optimization

Talk 1: A descent algorithm for the optimal control of ReLU neural network informed PDEs based on approximate directional derivatives
Speaker: Michael Hintermüller
Abstract: We propose and analyze a numerical algorithm for solving a class of optimal control problems for learning-informed semilinear partial differential equations. The latter is a class of PDEs whose constituents are in principle unknown and are approximated by nonsmooth ReLU neural networks. We first show that directly smoothing the ReLU network, with the aim of using classical numerical solvers, can have disadvantages: in particular, it may introduce multiple solutions for the corresponding state equation. This motivates a numerical algorithm that treats the nonsmooth optimal control problem directly, employing a descent method inspired by a bundle-free approach. Several numerical examples demonstrate the efficiency of the algorithm.
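The smoothing issue the abstract alludes to can be illustrated in isolation. A minimal sketch (assuming a softplus smoothing with parameter beta, which is one common choice and not necessarily the construction analyzed in the talk): softplus approximates ReLU uniformly, with the worst error, log(2)/beta, occurring exactly at the kink.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softplus(x, beta=10.0):
    # Smooth approximation of ReLU; recovers ReLU as beta -> infinity.
    return np.log1p(np.exp(beta * x)) / beta

x = np.linspace(-1.0, 1.0, 401)
# The approximation error peaks at the kink x = 0, where it equals log(2)/beta.
err = np.abs(softplus(x) - relu(x))
```

The nondifferentiability sits precisely where the smooth surrogate is least faithful, which is why replacing the ReLU inside a state equation is delicate.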

Talk 2: Differential estimates for fast first-order multilevel nonconvex optimisation
Speaker: Tuomo Valkonen
Abstract: PDE constraints appear in inverse imaging problems as physical models for measurements, while bilevel optimisation can be used for optimal experimental design and parameter learning. Such problems have traditionally been very expensive to solve, but effective single-loop approaches have recently been introduced, both in our work and in the machine learning community. In this talk, we discuss a simple gradient estimation formalisation for very general single-loop methods that include primal-dual methods for the inner problem, and conventional iterative solvers (Jacobi, Gauss–Seidel, conjugate gradients) for the adjoint problem and PDE constraints.
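The single-loop idea can be sketched on a hypothetical scalar toy problem (an illustration only, not the formalisation from the talk): instead of solving the inner problem to convergence before each outer update, one inexact inner step is interleaved with each outer step, and the current inner iterate is plugged into the outer gradient estimate.

```python
# Toy single-loop bilevel iteration (hypothetical example).
# Inner problem: x*(t) = argmin_x (x - t)^2 / 2, so x*(t) = t.
# Outer objective: F(t) = (x*(t) - 1)^2 / 2, minimised at t = 1.
x, t = 0.0, 0.0
for _ in range(200):
    x = x - 0.5 * (x - t)      # one inexact inner step instead of a full solve
    t = t - 0.5 * (x - 1.0)    # outer step using the gradient estimate x - 1
# Both iterates converge jointly to the bilevel solution (x, t) = (1, 1).
```

The point of such schemes is exactly this interleaving: neither loop waits for the other, yet the pair contracts to the bilevel solution.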

Talk 3: Deep Equilibrium Models for Poisson Inverse Problems via Mirror Descent
Speaker: Christian Daniele
Abstract: Inverse problems in imaging arise in a wide range of scientific and engineering applications, including medical imaging, astrophysics, and microscopy. These problems are inherently ill-posed, requiring advanced regularization techniques and optimization strategies to achieve stable and accurate reconstructions. In recent years, hybrid approaches that combine deep learning and variational methods have gained increasing attention. Well-established techniques include Algorithmic Unrolling, Plug-and-Play methods, and Deep Equilibrium Models. The latter are networks defined through fixed points, which are trained to match data samples from a training dataset. In this work, we focus on Deep Equilibrium Models to learn a data-driven regularization function for Poisson inverse problems, using the Kullback-Leibler divergence as the data fidelity term. To effectively handle this fidelity term, we employ Mirror Descent as the underlying optimization algorithm. We discuss theoretical guarantees of convergence, even in non-convex settings, incorporating a backtracking strategy, along with key aspects of training this class of models. To validate our approach, we evaluate its performance on a deblurring task with different kernels and varying levels of Poisson noise.
Authors: Luca Calatroni, Silvia Villa, Samuel Vaiter, Christian Daniele
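For the Kullback-Leibler fidelity, mirror descent with the negative-entropy mirror map gives multiplicative, Richardson–Lucy-type updates. A minimal sketch of that fidelity-only step, on a synthetic nonnegative forward operator and without the learned regularizer that is the subject of the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((20, 10)) + 0.1   # nonnegative toy forward operator
x_true = rng.random(10) + 0.1
y = A @ x_true                   # noiseless data under the Poisson model

def kl(y, z):
    # Generalized Kullback-Leibler divergence (nonnegative, 0 iff y == z).
    return float(np.sum(y * np.log(y / z) - y + z))

x = np.ones(10)
kl_init = kl(y, A @ x)
for _ in range(500):
    ratio = y / (A @ x)                  # y_i / (A x)_i
    x *= (A.T @ ratio) / A.sum(axis=0)   # multiplicative mirror-descent step
kl_final = kl(y, A @ x)
```

The multiplicative form keeps the iterates positive automatically, which is the usual motivation for pairing mirror descent with Poisson-type fidelities; the update monotonically decreases the KL fidelity.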

Speakers
Juan Carlos de los Reyes


Michael Hintermüller


Tuomo Valkonen

MODEMAT & University of Helsinki
Research interests: nonsmooth optimisation, bilevel optimisation, inverse problems, variational analysis, optimisation in measure spaces.
Location: Taper Hall (THH) 119, 3501 Trousdale Pkwy, Los Angeles, CA 90089
