Session: AI Meets Optimization (Part 1)
Chair: Jialin Liu
Cluster: Optimization for Emerging Technologies (LLMs, Quantum Computing, ...)
Talk 1: Data-Driven Performance Guarantees for Classical and Learned Optimizers
Speaker: Bartolomeo Stellato
Abstract: We introduce a data-driven approach to analyzing the performance of continuous optimization algorithms using generalization guarantees from statistical learning theory. We study classical and learned optimizers for solving families of parametric optimization problems. We build generalization guarantees for classical optimizers using a sample convergence bound, and for learned optimizers using the Probably Approximately Correct (PAC)-Bayes framework. To train learned optimizers, we use a gradient-based algorithm that directly minimizes the PAC-Bayes upper bound. Numerical experiments in signal processing, control, and meta-learning showcase the ability of our framework to provide strong generalization guarantees for both classical and learned optimizers given a fixed budget of iterations. For classical optimizers, our bounds are much tighter than those provided by worst-case analysis. For learned optimizers, our bounds are stronger than even the empirical performance observed for their non-learned counterparts.
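For readers unfamiliar with the training scheme described above, the toy sketch below illustrates the general idea of minimizing a PAC-Bayes-style upper bound by gradient descent. The problem family (parametric least squares), the learned-optimizer architecture (K learned step sizes), and the McAllester-style penalty are illustrative assumptions, not the speaker's setup; in particular, the rigorous bound requires a bounded loss, which this sketch glosses over.

```python
# Hypothetical sketch (not the authors' code): training a learned optimizer by
# gradient descent on a PAC-Bayes-style upper bound = empirical risk + complexity penalty.
import torch

torch.manual_seed(0)
n_train, dim, K, delta = 200, 10, 5, 0.05
A = torch.randn(dim, dim) / dim**0.5
thetas = torch.randn(n_train, dim)            # parametric right-hand sides b(theta)

def risk_after_K_steps(log_alpha):
    """Mean final objective 0.5*||A x_K - b||^2 over the training problems."""
    alphas = torch.exp(log_alpha)             # K learned step sizes
    x = torch.zeros(n_train, dim)
    for k in range(K):
        grad = (x @ A.T - thetas) @ A         # gradient of 0.5*||A x - b||^2
        x = x - alphas[k] * grad
    residual = x @ A.T - thetas
    return 0.5 * (residual ** 2).sum(dim=1).mean()

# Gaussian posterior over log step sizes (fixed variance); prior N(mu0, sigma^2 I).
mu0, sigma = torch.full((K,), -1.0), 0.1
mu = mu0.clone().requires_grad_(True)
opt = torch.optim.Adam([mu], lr=1e-2)

for it in range(300):
    eps = torch.randn(K)
    sample = mu + sigma * eps                 # reparameterized draw from the posterior
    emp_risk = risk_after_K_steps(sample)
    kl = ((mu - mu0) ** 2).sum() / (2 * sigma ** 2)   # KL between equal-variance Gaussians
    # McAllester-style penalty; strictly valid only for losses in [0, 1] (simplification here).
    penalty = torch.sqrt((kl + torch.log(torch.tensor(2 * n_train**0.5 / delta))) / (2 * n_train))
    bound = emp_risk + penalty                # surrogate PAC-Bayes objective
    opt.zero_grad(); bound.backward(); opt.step()

print("trained step sizes:", torch.exp(mu).detach())
```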
Talk 2: Differentiating through Solutions to Optimization Problems in Decision-Focused Learning
Speaker: Howard Heaton
Abstract: Many real-world problems can be framed as optimization problems, for which well-established algorithms exist. However, these problems often involve key parameters that are not directly observed. Instead, we typically have access to data that is correlated with these parameters, though the relationships are complex and difficult to describe explicitly. This challenge motivates the integration of machine learning with optimization: using machine learning to predict the hidden parameters and optimization to solve the resulting problem. This integration is known as decision-focused learning. In this talk, I will introduce decision-focused learning, with a particular focus on differentiating through solutions to optimization problems and recent advances in effectively scaling these computations.
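As a rough illustration of what "differentiating through solutions to optimization problems" can mean, the sketch below applies the implicit function theorem to an unconstrained quadratic program whose stationarity condition Q x - theta = 0 makes the implicit gradient explicit. The model, problem, and decision loss are hypothetical assumptions; the talk covers far more general problems and the techniques needed to scale these computations.

```python
# Illustrative decision-focused learning sketch (assumed problem, not from the talk):
# a model predicts the hidden cost vector theta of the inner problem
#   x*(theta) = argmin_x 0.5 * x^T Q x - theta^T x,
# and training differentiates through x*(theta) via the implicit function theorem.
import torch

torch.manual_seed(0)
dim = 5
Q = torch.eye(dim) * 2.0                                # fixed, positive definite

class QPSolve(torch.autograd.Function):
    """Solve the inner problem in the forward pass; implicit gradient in the backward pass."""
    @staticmethod
    def forward(ctx, theta):
        return torch.linalg.solve(Q, theta.T).T         # closed-form argmin per row

    @staticmethod
    def backward(ctx, grad_out):
        # Implicit function theorem on Q x - theta = 0 gives dL/dtheta = Q^{-T} dL/dx*.
        return torch.linalg.solve(Q.T, grad_out.T).T

# A small model maps observed features to the unobserved parameters theta.
model = torch.nn.Linear(8, dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

features = torch.randn(64, 8)
true_theta = features @ torch.randn(8, dim)             # synthetic ground truth
best_decisions = torch.linalg.solve(Q, true_theta.T).T  # decisions under the true parameters

for it in range(200):
    theta_hat = model(features)
    decisions = QPSolve.apply(theta_hat)
    loss = ((decisions - best_decisions) ** 2).mean()   # decision (task) loss, not a parameter loss
    opt.zero_grad(); loss.backward(); opt.step()
```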
Talk 3: Automation in Optimization: Enhancing Decomposition for Proximal and Parallel Methods
Speaker: Wotao Yin
Abstract: Optimization practitioners encounter substantial hurdles when transforming and decomposing optimization problems, especially when these problems comprise diverse components. Some elements may be ideally suited to proximal operators, which excel at managing non-smooth or constrained functions, while others lend themselves to parallel computing, enabling faster computation through distributed workloads. Practitioners must manually determine how to integrate these approaches effectively, which demands deep expertise and considerable time. An automated process could transform this landscape by analyzing problem structure and seamlessly applying the most appropriate technique to each component. In addition, automation could harness recent advances in automated parameter selection and acceleration techniques for first-order algorithms, which improve convergence speed and performance without manual tuning. We introduce an automated system that optimizes computational resources and delivers high-performance solutions, allowing experts to concentrate on strategic tasks such as problem formulation and result interpretation.
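As a concrete, purely illustrative instance of the kind of decomposition such a system might apply automatically, the sketch below splits a composite problem with consensus ADMM: each smooth block gets an independent, parallelizable update, while the non-smooth l1 term is handled through its proximal operator (soft-thresholding). The example problem and parameter choices are assumptions for illustration only, not the system described in the talk.

```python
# Consensus ADMM sketch for  minimize  sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1.
# The N block updates are independent and could be dispatched to parallel workers.
import numpy as np

rng = np.random.default_rng(0)
N, m, n, lam, rho = 4, 30, 10, 0.1, 1.0
A = [rng.standard_normal((m, n)) for _ in range(N)]
b = [rng.standard_normal(m) for _ in range(N)]

def local_update(Ai, bi, z, ui):
    """Block i's update: a ridge-regularized least-squares solve (independent per block)."""
    return np.linalg.solve(Ai.T @ Ai + rho * np.eye(n), Ai.T @ bi + rho * (z - ui))

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros((N, n)); u = np.zeros((N, n)); z = np.zeros(n)
for it in range(100):
    x = np.stack([local_update(A[i], b[i], z, u[i]) for i in range(N)])   # parallelizable
    z = soft_threshold((x + u).mean(axis=0), lam / (N * rho))             # prox of the l1 term
    u = u + x - z                                                         # dual (consensus) update

obj = sum(0.5 * np.linalg.norm(A[i] @ z - b[i])**2 for i in range(N)) + lam * np.abs(z).sum()
print("objective at consensus point:", round(obj, 4))
```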