Tuesday July 22, 2025 4:15pm - 5:30pm PDT
Session: AI Meets Optimization (Part 2)
Chair: Wotao Yin
Cluster: Optimization for Emerging Technologies (LLMs, Quantum Computing, ...)

Talk 1: Scientific Machine Learning for Optimization and Control
Speaker: Jan Drgona
Abstract: This talk presents a scientific machine learning (SciML) perspective on modeling, optimization, and control. Specifically, we will discuss the opportunity to develop a unified SciML framework for modeling dynamical systems, learning-to-optimize, and learning-to-control methods. We demonstrate the application of these emerging SciML methods in a range of engineering case studies, including modeling of networked dynamical systems, building control, and the dynamic economic dispatch problem in power systems. Furthermore, we will introduce NeuroMANCER, an open-source library that facilitates the implementation and prototyping of diverse SciML methods for a broad range of application problems.

Talk 2: What Can and Cannot Be Learned by Implicit Models?
Speaker: Jialin Liu
Abstract: Implicit models represent an efficient class of deep neural networks. Unlike explicit models, which are trained to map inputs to outputs in an end-to-end manner, implicit models are designed so that their fixed points align with the data. Their main advantage, as often noted in the literature, is memory efficiency—they store only a single layer but can function at any depth. However, this talk focuses on another important but overlooked benefit: greater expressive power. Specifically, we identify a class of target mappings that implicit models can represent, whereas explicit models cannot. Conversely, we also establish that implicit models are inherently limited to this class of mappings. To illustrate the practical relevance of these findings, we present case studies across imaging, scientific computing, and operations research, demonstrating the broad applicability of these target mappings.
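The defining property described in this abstract — a model whose output is a fixed point of a learned map rather than the result of a fixed stack of layers — can be illustrated with a small numerical sketch. This is a hypothetical toy example (the layer form, weights, and solver are illustrative assumptions, not from the talk): the implicit layer returns the point z* satisfying z* = tanh(W z* + U x + b), found here by plain fixed-point iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit "layer": its output is the fixed point z* of
#   z = tanh(W z + U x + b),
# rather than the output of a fixed number of stacked layers.
d_in, d_hid = 3, 8
W = rng.standard_normal((d_hid, d_hid))
W *= 0.5 / np.linalg.norm(W, 2)  # spectral norm 0.5 < 1, so the map is a contraction
U = rng.standard_normal((d_hid, d_in))
b = rng.standard_normal(d_hid)

def implicit_layer(x, tol=1e-10, max_iter=1000):
    """Solve z = tanh(W z + U x + b) by fixed-point iteration."""
    z = np.zeros(d_hid)
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z

x = rng.standard_normal(d_in)
z_star = implicit_layer(x)

# The output is (numerically) a fixed point of the layer map.
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x + b))
print(residual < 1e-8)
```

Only the single weight set (W, U, b) is stored, yet the iteration behaves like an arbitrarily deep weight-tied network — the memory-efficiency point the abstract mentions.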

Talk 3: Learning-to-Optimize via Implicit Networks
Speaker: Samy Wu Fung
Abstract: Learning-to-Optimize (L2O) is an emerging approach in which machine learning is used to learn an optimization algorithm. It automates the design of an optimization method based on its performance on a set of training problems. However, learning optimization algorithms in an end-to-end fashion can be challenging due to the asymptotic nature of the algorithms being learned. This talk discusses a class of network architectures, called implicit networks, whose outputs are defined by a fixed-point (or optimality) condition, which makes them naturally suited for L2O. We will cover how to train and design these networks efficiently.
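A key ingredient in training networks defined by a fixed-point condition is differentiating through the fixed point itself, rather than backpropagating through every solver iteration. The scalar toy sketch below (all function forms and values are illustrative assumptions, not from the talk) applies the implicit function theorem: if z*(w) solves z = f(z; w), then dz*/dw = (1 - ∂f/∂z)⁻¹ ∂f/∂w at z*, which the code checks against a finite-difference derivative.

```python
import numpy as np

# Toy implicit network output: z*(w) defined by z = f(z; w) with
#   f(z; w) = tanh(w * z + c).
# Implicit differentiation gives dz*/dw = (1 - df/dz)^(-1) * df/dw at z*,
# with no need to backpropagate through the solver loop.

C = 0.3

def f(z, w):
    return np.tanh(w * z + C)

def solve_fixed_point(w, tol=1e-12, max_iter=10000):
    z = 0.0
    for _ in range(max_iter):
        z_next = f(z, w)
        if abs(z_next - z) < tol:
            break
        z = z_next
    return z

w = 0.7
z_star = solve_fixed_point(w)

# Partial derivatives of f at the fixed point (sech^2 is the tanh derivative).
sech2 = 1.0 - np.tanh(w * z_star + C) ** 2
df_dz = sech2 * w
df_dw = sech2 * z_star

grad_implicit = df_dw / (1.0 - df_dz)

# Sanity check: central finite difference of the solved fixed point.
eps = 1e-6
grad_fd = (solve_fixed_point(w + eps) - solve_fixed_point(w - eps)) / (2 * eps)
print(abs(grad_implicit - grad_fd) < 1e-5)
```

In a learned optimizer, w would be the trainable parameters and this gradient would feed a standard training loop; the memory cost is independent of how many solver iterations were needed to reach z*.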

Speakers

Jan Drgona


Jialin Liu

Assistant Professor, University of Central Florida
Bio: Jialin Liu earned his B.S. degree in Automation from Tsinghua University in 2015 and his Ph.D. in Applied Mathematics from the University of California, Los Angeles (UCLA) in 2020. He is currently...

Samy Wu Fung


Wotao Yin

Sponsor, Damo/Alibaba
Taper Hall (THH) 208, 3501 Trousdale Pkwy, Los Angeles, CA 90089
