Wednesday July 23, 2025 4:15pm - 5:30pm PDT
Session: Automatic Differentiation as a Tool for Computational Science
Chair: Sri Hari Krishna Narayanan
Cluster: Computational Software

Talk 1: EnzymeMLIR: High-Performance Automatic Differentiation of Tensor Code
Speaker: William Moses
Abstract: Automatic differentiation (AD) is key to training neural networks, Bayesian inference, and scientific computing. Applying these techniques, however, typically requires either rewriting code in a specific machine-learning framework or providing derivatives by hand. This talk presents Enzyme, a high-performance AD compiler plugin for the LLVM and MLIR compiler frameworks. Enzyme differentiates programs in any language whose compiler targets LLVM/MLIR, including C/C++, Fortran, Julia, Rust, Swift, JAX, and others, thereby providing native AD capabilities in these languages with state-of-the-art performance. Unlike traditional tools, Enzyme performs AD on optimized intermediate representation (IR): on a combined machine-learning and scientific-computing benchmark suite, differentiating optimized IR achieves a geometric-mean speedup of 4.2x over differentiating the IR before optimization. The talk will also cover the work that made Enzyme the first fully automatic reverse-mode AD tool to generate gradients of existing GPU kernels, as well as the benefits of operating on high-level structured representations such as MLIR.
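
For context, the sketch below shows the typical usage pattern for Enzyme from C/C++: declaring the `__enzyme_autodiff` entry point and calling it on an ordinary function. This is a minimal illustrative example, not code from the talk; the exact compiler flags and plugin path depend on the local Enzyme and LLVM install.

```cpp
#include <cstdio>

// Enzyme's reverse-mode entry point. The Enzyme compiler plugin replaces
// calls to this declaration with generated derivative code, e.g. built with
//   clang++ square.cpp -O2 -fplugin=ClangEnzyme-<version>.so
// (flags vary by Enzyme/LLVM version).
extern double __enzyme_autodiff(void*, double);

double square(double x) { return x * x; }

int main() {
    double x = 3.0;
    // Differentiate square at x; Enzyme operates on the optimized IR.
    double dx = __enzyme_autodiff((void*)square, x);
    printf("d/dx x^2 at x = %.1f is %.1f\n", x, dx);  // expect 6.0
    return 0;
}
```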

Talk 2: Challenges with Implementing Differentiable Quantum Dynamics
Speaker: Sri Hari Krishna Narayanan
Abstract: Differentiable quantum dynamics requires automatic differentiation of a complex-valued initial value problem, in which a system of ordinary differential equations is numerically integrated from a specified initial condition, as well as of a matrix eigendecomposition. This work surveys existing differentiable programming frameworks for these tasks and finds that no framework natively supports our application requirements in full. We therefore demonstrate a need for broader support of complex-valued, differentiable numerical integration in scientific computing libraries. We further show that the derivatives of our quantum dynamics application can be computed through a combination of differentiable programming frameworks and hand-coded derivatives.
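
As a concrete, hypothetical instance of the problem class, the sketch below integrates a complex-valued initial value problem (a two-level Schrödinger equation with an invented Hamiltonian) using a hand-written RK4 loop, then estimates the derivative of a real-valued objective with central finite differences, i.e., the baseline against which one would validate an AD or hand-coded adjoint gradient. The Hamiltonian and objective are made up for illustration and are not from the talk.

```cpp
#include <array>
#include <complex>
#include <cstdio>

using cplx = std::complex<double>;
using State = std::array<cplx, 2>;   // two-level quantum state

// Right-hand side of d(psi)/dt = -i * H(theta) * psi for the invented
// Hamiltonian H = [[theta, 1], [1, -theta]].
State rhs(const State& psi, double theta) {
    const cplx I(0.0, 1.0);
    return { -I * (theta * psi[0] + psi[1]),
             -I * (psi[0] - theta * psi[1]) };
}

// psi + c * k, elementwise.
State axpy(const State& psi, double c, const State& k) {
    return { psi[0] + c * k[0], psi[1] + c * k[1] };
}

// Classical RK4 over the complex-valued state: the "complex-valued
// initial value problem" the abstract refers to.
State integrate(double theta, double T, int steps) {
    State psi = { cplx(1.0, 0.0), cplx(0.0, 0.0) };
    const double h = T / steps;
    for (int n = 0; n < steps; ++n) {
        State k1 = rhs(psi, theta);
        State k2 = rhs(axpy(psi, 0.5 * h, k1), theta);
        State k3 = rhs(axpy(psi, 0.5 * h, k2), theta);
        State k4 = rhs(axpy(psi, h, k3), theta);
        for (int j = 0; j < 2; ++j)
            psi[j] += (h / 6.0) * (k1[j] + 2.0 * k2[j] + 2.0 * k3[j] + k4[j]);
    }
    return psi;
}

// Real-valued objective: final population of level 0.
double loss(double theta) {
    return std::norm(integrate(theta, 1.0, 1000)[0]);
}

int main() {
    // Central finite difference as a derivative baseline.
    const double theta = 0.7, eps = 1e-6;
    double g = (loss(theta + eps) - loss(theta - eps)) / (2.0 * eps);
    printf("d(loss)/d(theta) at theta = %.1f: %.6f\n", theta, g);
    return 0;
}
```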

Talk 3: Leveraging Automatic Differentiation to Improve Ice-Sheet and Ocean Modeling
Speaker: Shreyas Gaikwad
Abstract: Mathematical modeling of geophysical fluids is a complex undertaking that necessarily involves several approximations (constitutive models, spatial discretization, and subgrid-scale parameterizations) to close a system of high-fidelity equations such as conservation of mass, momentum, energy, and tracers. Examples of such parameters include those used to represent aerosol and cloud microphysics in the atmosphere, the coefficients of mixing parameterizations in the ocean, and the basal sliding coefficients beneath ice sheets. Model boundary and initial conditions are also required and are often poorly constrained. Meaningful interpretation of model output therefore demands investigation of the impact of these uncertain parameters, initial conditions, and boundary conditions on the simulated state, along with an effort to identify their "best" values for some specific metric. In the context of ice-sheet and ocean modeling, gradients of model-data misfits or other quantities of interest (QoI) with respect to the various uncertain parameters, boundary conditions, or initial conditions are a key ingredient for sensitivity analysis, model calibration, state estimation, and uncertainty quantification (UQ), all of which guide the improvement of model simulations through PDE-constrained, gradient-based optimization. We present new frameworks for generating derivative code, i.e., tangent linear and adjoint models, for an ice-sheet model, SICOPOLIS, and an ocean model, MITgcm. These derivative operators are powerful computational engines that efficiently compute comprehensive gradients, or sensitivities, of scalar-valued model output, including least-squares model-data misfits or other important QoI, with respect to high-dimensional model inputs (such as model initial conditions, parameter fields, or boundary conditions). Both frameworks leverage Tapenade, an open-source automatic differentiation tool, to generate and maintain up-to-date derivative code, and both are open source and freely available.
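
To make the role of an adjoint model concrete, here is a minimal sketch (not SICOPOLIS, MITgcm, or Tapenade-generated code): a toy scalar model with one uncertain parameter, a hand-derived adjoint sweep that returns the gradient of a least-squares misfit for roughly the cost of one extra model run, and a few iterations of the gradient-based calibration loop the abstract describes.

```cpp
#include <cstdio>
#include <vector>

// Toy stand-in for a large geophysical model: forward Euler for
// du/dt = -p * u with one uncertain parameter p. (Hypothetical example;
// the talk's models derive their adjoints via Tapenade.)
double forward(double p, double u0, double h, int N, std::vector<double>& traj) {
    traj.assign(N + 1, u0);
    for (int n = 0; n < N; ++n)
        traj[n + 1] = traj[n] * (1.0 - h * p);
    return traj[N];
}

// Hand-derived adjoint of the forward loop: a single backward sweep
// yields dJ/dp for the misfit J = 0.5 * (u_N - obs)^2, at roughly the
// cost of one extra model run, independent of the number of parameters.
double adjoint_gradient(double p, double h, int N,
                        const std::vector<double>& traj, double obs) {
    double lambda = traj[N] - obs;        // adjoint of u_N
    double dJdp = 0.0;
    for (int n = N - 1; n >= 0; --n) {
        dJdp += lambda * (-h * traj[n]);  // d(u_{n+1})/dp contribution
        lambda *= (1.0 - h * p);          // propagate adjoint backward
    }
    return dJdp;
}

int main() {
    double p = 0.5;                       // initial parameter guess
    const double u0 = 1.0, h = 0.01, obs = 0.7;
    const int N = 100;
    std::vector<double> traj;

    // Gradient-based calibration of p against the observation obs.
    for (int it = 0; it < 50; ++it) {
        double uN = forward(p, u0, h, N, traj);
        double J = 0.5 * (uN - obs) * (uN - obs);
        double g = adjoint_gradient(p, h, N, traj, obs);
        p -= 1.0 * g;                     // fixed step chosen for this toy
        if (it % 10 == 0) printf("iter %2d  J = %.3e  p = %.4f\n", it, J, p);
    }
    return 0;
}
```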

Speakers

William Moses

Sri Hari Krishna Narayanan

Shreyas Gaikwad
Location: Taper Hall (THH) 102, 3501 Trousdale Pkwy, Los Angeles, CA 90089
