Session: Nonsmooth PDE Constrained Optimization: Algorithms, Analysis and Applications Part 1
Chair: Denis Ridzal
Cluster: PDE-constrained Optimization
Talk 1: Digital Twins and Optimization Under Uncertainty
Speaker: Harbir Antil
Abstract: This talk begins by studying the role of risk measures, such as Conditional Value at Risk (CVaR), in identifying weaknesses in Structural Digital Twins. CVaR is shown to outperform the classical expectation (risk-neutral setting) for such problems. Nevertheless, this framework assumes knowledge of the underlying distribution. To overcome this requirement, we introduce the notion of Rockafellian relaxation, which can handle realistic distributional ambiguities. Both risk-neutral and risk-averse formulations are discussed. Applications to real-life digital twins of bridges, dams, and wind turbines are considered. Time permitting, both the static and dynamic problems arising in civil and mechanical engineering will be presented.
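For context, a standard definition of CVaR is the Rockafellar-Uryasev formulation; the notation below is generic background, not material taken from the talk. For a random loss $X$ and confidence level $\beta \in (0,1)$,
\[
\mathrm{CVaR}_\beta(X) \;=\; \inf_{t \in \mathbb{R}} \left\{ t + \frac{1}{1-\beta}\,\mathbb{E}\big[(X - t)_+\big] \right\},
\]
where $(s)_+ = \max(s,0)$. As $\beta \to 0$ this recovers the plain expectation (the risk-neutral setting above), while larger $\beta$ weights the tail of the loss distribution more heavily.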
Talk 2: Infinite-horizon optimal control of operator equations with random inputs
Speaker: Olena Melnikov
Abstract: We investigate infinite-horizon discounted optimal control problems governed by operator equations with random inputs. Our framework includes parameterized evolution equations, such as those arising from ordinary and partial differential equations. The objective function is risk-neutral, aiming to optimize the expected discounted cost over an infinite time horizon. We establish the existence of optimal solutions. Furthermore, we discuss the convergence of sample-based approximations, demonstrating their effectiveness in approximating the true problem.
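For orientation, a risk-neutral discounted objective of the kind described here, together with a sample-based approximation, might be written as follows; the notation ($\gamma$, $\ell$, $y$, $\xi$) is illustrative and not taken from the talk:
\[
\min_{u} \; \mathbb{E}\!\left[\int_0^\infty e^{-\gamma t}\,\ell\big(y(t;u,\xi),u(t)\big)\,dt\right]
\;\approx\;
\min_{u} \; \frac{1}{N}\sum_{i=1}^{N}\int_0^\infty e^{-\gamma t}\,\ell\big(y(t;u,\xi^i),u(t)\big)\,dt,
\]
where $\gamma > 0$ is the discount factor, $y$ solves the operator equation for the random input $\xi$, and $\xi^1,\dots,\xi^N$ are i.i.d. samples; the convergence question is then how minimizers of the sample-based problem approach those of the true problem as $N \to \infty$.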
Talk 3: Nonuniform derivative-based random weight initialization for neural network optimization
Speaker: Konstantin Pieper
Abstract: Neural networks can alleviate the curse of dimensionality by detecting subspaces in the input data corresponding to large output variability. To exploit this, the nonlinear input weights of the network have to align with these directions during network training. As a step toward guessing these patterns before nonlinear optimization-based neural network regression, we propose nonuniform data-driven parameter distributions for weight initialization. These parameter distributions are developed in the context of nonparametric regression models based on shallow neural networks and employ derivative data of the function to be approximated. We use recent results on the harmonic analysis and sparse representation of fully trained (optimal) neural networks to obtain densities that concentrate in appropriate regions of the input weight space. We then suggest simplifications of these exact densities, based on approximate derivative data at the input points, that allow for very efficient sampling. The resulting random feature models perform close to optimal networks in several scenarios and compare favorably to conventional uniform random feature models.
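To make the idea concrete, the following is a minimal Python sketch of a derivative-aligned random feature initialization, assuming gradient samples of the target function are available; all names (derivative_based_init, the ReLU feature map, the anchoring of biases at data points) are illustrative choices, not the authors' implementation:

    import numpy as np

    def derivative_based_init(X, grads, n_features, seed=None):
        # X: (n_points, d) input samples; grads: (n_points, d) approximate
        # gradients of the target function at those points (assumed given).
        rng = np.random.default_rng(seed)
        norms = np.linalg.norm(grads, axis=1)
        probs = norms / norms.sum()            # favor points with large output variability
        idx = rng.choice(len(X), size=n_features, p=probs)  # zero-gradient points are never drawn
        W = grads[idx] / norms[idx, None]      # align input weights with gradient directions
        b = -np.einsum("ij,ij->i", W, X[idx])  # place each ridge's kink at its anchor point
        return W, b

    def features(X, W, b):
        # ReLU random feature map; only the outer linear coefficients
        # are subsequently fit, e.g. by least squares.
        return np.maximum(X @ W.T + b, 0.0)

A conventional uniform random feature model would instead draw the rows of W uniformly from the sphere, ignoring where the target function actually varies.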