Session: Bilevel Optimization for Inverse Problems Part 2
Chair: Juan Carlos de los Reyes
Cluster: PDE-constrained Optimization
Talk 1: Linesearch-enhanced inexact forward–backward methods for bilevel optimization
Speaker: Marco Prato
Abstract: Bilevel optimization problems arise in many real-world applications and are often characterized by the unavailability of exact objective function and gradient values. Developing mathematically sound optimization methods that effectively handle such inexact information is crucial for ensuring reliable and efficient solutions. In this talk we propose a line-search-based algorithm for solving a bilevel optimization problem in which the approximate gradient and function evaluations obey an adaptive tolerance rule. Our method is based on implicit differentiation under standard assumptions, and its main novelty with respect to similar approaches is a well-posed inexact line-search procedure that uses only approximate function values together with adaptive accuracy control. This work is partially supported by the PRIN project 20225STXSB, under the National Recovery and Resilience Plan (NRRP) funded by the European Union - NextGenerationEU.
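
As a rough illustration of the ingredients named in the abstract (not the authors' actual algorithm), the following Python sketch applies them to a toy ridge-regression bilevel problem: the lower level is solved inexactly, the hypergradient comes from implicit differentiation, and an Armijo-type backtracking test on approximate function values is relaxed by the current accuracy tolerance, which is then tightened adaptively. All problem data, constants, and the specific tolerance rule are hypothetical.

```python
import numpy as np

# Toy bilevel problem (hypothetical illustration, not the authors' test case):
#   lower level:  x*(lam) = argmin_x 0.5*||A x - b||^2 + 0.5*lam*||x||^2
#   upper level:  F(lam)  = 0.5*||x*(lam) - x_ref||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_ref = rng.standard_normal(10)
b = A @ x_ref + 0.1 * rng.standard_normal(30)

def lower_solve(lam, tol):
    """Solve the ridge lower-level problem inexactly: gradient descent
    stopped once the gradient norm falls below tol."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A.T @ A, 2) + lam      # Lipschitz constant of the gradient
    while True:
        g = A.T @ (A @ x - b) + lam * x
        if np.linalg.norm(g) <= tol:
            return x
        x -= g / L

def hypergradient(lam, x):
    """Approximate dF/dlam by implicit differentiation:
    (A^T A + lam I) v = -x,  then  dF/dlam = (x - x_ref)^T v."""
    H = A.T @ A + lam * np.eye(A.shape[1])
    v = np.linalg.solve(H, -x)                # in practice solved inexactly, e.g. by CG
    return (x - x_ref) @ v

def F_approx(lam, tol):
    x = lower_solve(lam, tol)
    return 0.5 * np.linalg.norm(x - x_ref) ** 2, x

lam, tol = 1.0, 1e-2
for k in range(50):
    Fk, x = F_approx(lam, tol)
    g = hypergradient(lam, x)
    # Armijo-type backtracking on approximate function values; the accuracy
    # tolerance enters the sufficient-decrease test as a safeguard margin.
    step = 1.0
    while step > 1e-12:
        lam_new = max(lam - step * g, 1e-8)   # keep the regularizer positive
        F_new, _ = F_approx(lam_new, tol)
        if F_new <= Fk - 1e-4 * step * g ** 2 + 2 * tol:
            break
        step *= 0.5
    lam = lam_new
    tol = max(0.5 * tol, 1e-8)                # adaptive rule: tighten accuracy over iterations
print("lam ~", lam)
```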
Talk 2: Tensor train solution to uncertain optimization problems with shared sparsity penalty
Speaker: Akwum Onwunta
Abstract: We develop first- and second-order numerical optimization methods for non-smooth optimization problems featuring a shared sparsity penalty and constrained by differential equations with uncertainty. To alleviate the curse of dimensionality, we use tensor product approximations. To handle the non-smoothness of the objective function, we introduce a smoothed version of the shared sparsity objective. We consider both a benchmark elliptic PDE constraint and a more realistic topology optimization problem. We demonstrate that the error converges linearly in the number of iterations and in the smoothing parameter, and faster than algebraically in the number of degrees of freedom, which comprises the number of quadrature points per variable and the tensor ranks. Moreover, in the topology optimization problem, the smoothed shared sparsity penalty reduces the tensor ranks compared to the unpenalized solution. This enables us to find a sparse high-resolution design under high-dimensional uncertainty.
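
For orientation (the paper's discretization and penalty may differ), the Python sketch below combines two assumed ingredients: the standard TT-SVD algorithm for compressing a full tensor into tensor-train cores, and a smoothed shared-sparsity penalty taken here as an L1-in-space norm of the L2 norm over random samples, with a smoothing parameter gamma removing the kink at zero.

```python
import numpy as np

def tt_svd(T, eps=1e-8):
    """Standard TT-SVD (Oseledets): compress a full tensor into tensor-train
    cores by sequential truncated SVDs with a per-step error budget."""
    dims, d = T.shape, T.ndim
    delta = eps * np.linalg.norm(T) / np.sqrt(d - 1)
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # discarded energy per rank
        rk = max(1, int(np.sum(tail > delta)))         # smallest rank within budget
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def smoothed_shared_sparsity(u, gamma):
    """Assumed form of the smoothed shared-sparsity penalty: an L1 norm over
    space of the L2 norm over random samples, with sqrt(. + gamma^2)
    replacing the non-smooth sqrt at zero."""
    return np.sum(np.sqrt(np.mean(u ** 2, axis=1) + gamma ** 2))

# Demo: compress a smooth 3D tensor and check the reconstruction error.
X = np.fromfunction(lambda i, j, k: 1.0 / (1.0 + i + j + k), (8, 8, 8))
cores = tt_svd(X, eps=1e-6)
Y = cores[0]
for G in cores[1:]:
    Y = np.tensordot(Y, G, axes=1)                     # contract the shared TT rank index
print("TT ranks:", [G.shape[2] for G in cores])
print("rel. error:", np.linalg.norm(Y.reshape(X.shape) - X) / np.linalg.norm(X))

# Demo of the penalty on samples u[i, j] = u(x_i, omega_j) of a control field.
u = np.vstack([np.zeros((5, 100)), np.ones((3, 100))])  # sparse-in-space field
print("penalty:", smoothed_shared_sparsity(u, gamma=1e-3))
```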
Talk 3: TBD
Speaker: Juan Carlos de los Reyes
Abstract: TBD