Monday July 21, 2025 10:30am - 11:45am PDT
Session: Advances in Optimization for AI: From Theory to Practice
Chair: El Houcine Bergou

Talk 1: Dynamic Inertial Newton-Inspired Algorithms and Their Applications to Non-Smooth and Non-Convex Optimization
Speaker 1: Xin Li
Abstract: Dynamic Inertial Newton (DIN) systems have been used to motivate or explain several accelerated algorithms for a wide range of optimization problems. In this presentation, we propose new algorithms inspired by DIN and establish their convergence analysis. Furthermore, we discuss how to apply these algorithms to applications in deep learning via a smoothing approximation approach. The motivation for our study is twofold: to give a direct approach to finding a critical point of the objective function, and to cover most problems in machine learning, whose objective functions are typically non-smooth and non-convex.
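As a rough illustration only (not the speakers' algorithm): discretizations of DIN flows typically yield an inertial update with a Hessian-free gradient-difference correction, and a non-smooth term such as |x| can be replaced by a smoothing approximation like sqrt(x^2 + mu^2). All parameter values below are hypothetical.

```python
import numpy as np

def grad_smoothed_abs(x, mu=0.1):
    # Gradient of the smoothing approximation sqrt(x^2 + mu^2) of the
    # non-smooth function |x| (mu chosen here for illustration).
    return x / np.sqrt(x**2 + mu**2)

def din_inspired_minimize(x0, grad, steps=500, alpha=0.7, beta=0.1, s=0.05):
    # Generic DIN-inspired iteration (illustrative parameters):
    # momentum term alpha*(x_k - x_{k-1}) plus a gradient-difference
    # correction -beta*(g_k - g_{k-1}) standing in for Hessian damping.
    x_prev = x = np.asarray(x0, dtype=float)
    g_prev = grad(x)
    for _ in range(steps):
        g = grad(x)
        x_next = x + alpha * (x - x_prev) - beta * (g - g_prev) - s * g
        x_prev, x, g_prev = x, x_next, g
    return x

# Drive the smoothed |x| objective toward its critical point at 0.
x_star = din_inspired_minimize(np.array([2.0]), grad_smoothed_abs)
```

The gradient-difference term mimics the Newton-type damping of the continuous DIN dynamics without forming a Hessian, which is what makes such schemes attractive at deep-learning scale.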


Talk 2: Tuning-Free Alignment of Generative AI
Speaker 2: Amrit Singh Bedi
Abstract: In text-to-image diffusion models, inference-time alignment has emerged as a promising alternative to fine-tuning, which is often computationally prohibitive. By modifying the sampling procedure, these methods can efficiently adapt pre-trained diffusion models to new human objectives—also referred to as target rewards. However, they frequently encounter reward hacking, wherein under-specified reward functions lead the model to generate images that stray significantly from the intended prompts. To address this issue, we propose MIRA, an alignment-based framework that reformulates direct noise optimization as a constrained reward maximization problem. Our approach introduces a regularization term that preserves prompt fidelity while pursuing high-reward outputs, thereby reducing reward hacking. In comprehensive experiments across multiple reward functions, MIRA consistently achieves win rates of over 60% against baseline methods, all while demonstrating robust adherence to the original prompts.
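As a schematic toy (the reward, the regularizer weight, and the gradient-ascent loop below are made up for illustration and do not reproduce MIRA's actual objective or optimizer), regularized noise optimization trades reward against proximity to the original sampling noise, which is what curbs reward hacking:

```python
import numpy as np

def toy_reward_grad(z):
    # Hypothetical quadratic reward -||z - 1||^2; its gradient pulls z
    # toward an all-ones "high-reward" direction.
    return -2.0 * (z - np.ones_like(z))

def align_noise(z0, steps=200, lr=0.05, lam=1.0):
    # Maximize  reward(z) - lam * ||z - z0||^2  by gradient ascent.
    # The second term plays the role of a prompt-fidelity regularizer:
    # it keeps the optimized noise close to the original noise z0.
    z = z0.copy()
    for _ in range(steps):
        g = toy_reward_grad(z) - 2.0 * lam * (z - z0)
        z = z + lr * g
    return z

z0 = np.zeros(4)
z_aligned = align_noise(z0)  # settles between z0 (fidelity) and the reward optimum
```

With lam = 1 the toy problem has the closed-form solution z = 0.5, halfway between the initial noise and the reward maximizer, illustrating how the regularizer prevents the reward term from dragging the sample arbitrarily far from the prompt-consistent starting point.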

Talk 3: Matrix and Tensor Completion with Cross-Concentrated Sampling: Bridging Uniform Sampling and CUR Sampling
Speaker 3: Hanqin Cai
Abstract: While uniform sampling has been widely studied in the matrix and tensor completion literature, CUR sampling approximates a low-rank matrix/tensor via mode-wise samples. Unfortunately, both sampling models lack flexibility for various circumstances in real-world applications. In this work, we propose a novel and easy-to-implement sampling strategy, coined Cross-Concentrated Sampling (CCS). By bridging uniform sampling and CUR sampling, CCS provides extra flexibility that can potentially save sampling costs in applications. In addition, we provide a sufficient condition for CCS-based matrix completion. Moreover, we propose a highly efficient non-convex algorithm, termed Iterative CUR Completion (ICURC), for the proposed CCS model. Numerical experiments verify the empirical advantages of CCS and ICURC against uniform sampling and baseline algorithms on both synthetic and real-world datasets.
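A minimal sketch of the sampling pattern (the fractions and Bernoulli rate below are illustrative, not the rates analyzed in the talk): a CCS-style mask concentrates observations on a few selected rows and columns, as in CUR sampling, but samples uniformly at random within that cross:

```python
import numpy as np

def ccs_mask(m, n, row_frac=0.3, col_frac=0.3, p_cross=0.5, rng=None):
    # Illustrative cross-concentrated sampling mask for an m x n matrix.
    rng = np.random.default_rng(rng)
    rows = rng.choice(m, size=max(1, int(row_frac * m)), replace=False)
    cols = rng.choice(n, size=max(1, int(col_frac * n)), replace=False)
    # The "cross": all entries lying in the selected rows or columns.
    cross = np.zeros((m, n), dtype=bool)
    cross[rows, :] = True
    cross[:, cols] = True
    # Uniform Bernoulli sampling restricted to the cross region, so the
    # scheme interpolates between pure CUR sampling (p_cross = 1) and
    # sparse uniform-style sampling within a structured support.
    return cross & (rng.random((m, n)) < p_cross)

mask = ccs_mask(100, 80, rng=0)  # True where an entry is observed
```

Concentrating samples on a cross is what lets a completion algorithm exploit CUR-type structure while the within-cross randomness retains the incoherence benefits of uniform sampling.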
Speakers
Xin Li

Amrit Singh Bedi

Hanqin Cai

Assistant Professor, University of Central Florida
Taper Hall (THH) 215, 3501 Trousdale Pkwy, Los Angeles, CA 90089
