Monday July 21, 2025 10:30am - 11:45am PDT
Session: Methods for Large-Scale Nonlinear Optimization I
Chair: Albert Berahas
Cluster: Nonlinear Optimization

Talk 1: Advanced, Adaptive and Flexible Algorithms for Decentralized Optimization
Speaker: Albert Berahas
Abstract: The problem of optimizing an objective function by employing a decentralized procedure using multiple agents in a connected network has gained significant attention over the last few decades. This is due to the wide applicability of decentralized optimization to many important science and engineering applications such as optimal control, machine learning, robotics, sensor networks, and smart grids. Decentralized optimization problems come in diverse shapes and forms and can have very different characteristics. In this talk, we discuss novel flexible approaches for solving decentralized optimization problems that adapt to problem characteristics. We present two unifying algorithmic frameworks that recover popular algorithms as special cases. We discuss the rationale behind our proposed techniques, establish convergence in expectation and complexity guarantees for our algorithms, and present encouraging numerical results.
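To make the setting concrete, the following is a minimal sketch of a classic baseline in this area, decentralized gradient descent (DGD) on a ring network of agents, each holding a local quadratic. This is only an illustrative toy, not the unifying frameworks presented in the talk; all names and the toy problem are ours.

```python
# Decentralized gradient descent (DGD) sketch: n agents jointly minimize
# sum_i f_i(x) with f_i(x) = (x - b_i)^2, communicating only with ring
# neighbors. Illustrative only; not the talk's adaptive frameworks.

def dgd(b, steps=200, alpha=0.05):
    n = len(b)
    x = [0.0] * n  # each agent's local copy of the decision variable
    for _ in range(steps):
        # Mixing step: average with ring neighbors (doubly stochastic weights)
        mixed = [(x[(i - 1) % n] + x[i] + x[(i + 1) % n]) / 3.0
                 for i in range(n)]
        # Local gradient step: grad f_i(x) = 2 (x - b_i)
        x = [mixed[i] - alpha * 2.0 * (mixed[i] - b[i]) for i in range(n)]
    return x

# With a constant stepsize, the agents' copies converge to a neighborhood
# of the global minimizer mean(b).
estimates = dgd([1.0, 2.0, 3.0, 4.0])
```

With a fixed stepsize, DGD reaches only a neighborhood of the optimum; gradient-tracking variants, which the unifying frameworks in this line of work typically recover as special cases, remove that bias.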

Talk 2: Nearly Optimal L_p Risk Minimization
Speaker: Zhichao Jia
Abstract: Convex risk measures play a foundational role in the area of stochastic optimization. However, in contrast to risk-neutral models, their applications are still limited due to the lack of efficient solution methods. In particular, the mean L_p semi-deviation is a classic risk minimization model, but solving it is highly challenging due to the composition of concave and convex functions and the lack of uniform Lipschitz continuity. In this talk, we discuss recent progress on the design of efficient algorithms for L_p risk minimization, including a novel lifting reformulation to handle the concave-convex composition and a new stochastic approximation method to handle the lack of Lipschitz continuity. We establish an upper bound on the sample complexity associated with this approach and show that this bound is not improvable for L_p risk minimization in general.
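For readers unfamiliar with the model, the following sketch evaluates the mean upper L_p semi-deviation on an empirical sample, using the standard form rho(X) = E[X] + c * (E[max(X - E[X], 0)^p])^(1/p) with c in [0, 1]. The outer 1/p power is concave while the inner expectation is convex in X, which is the concave-convex composition referred to above. Names and defaults here are illustrative, not the paper's.

```python
# Empirical mean L_p upper semi-deviation risk measure (standard form):
#   rho(X) = E[X] + c * ( E[ max(X - E[X], 0)^p ] )^(1/p),  c in [0, 1].
# Illustrative evaluation only; the talk concerns *minimizing* this risk.

def mean_lp_semideviation(sample, p=2, c=0.5):
    n = len(sample)
    mean = sum(sample) / n
    # Inner expectation: p-th moment of the upside (loss-side) deviation
    semidev_p = sum(max(x - mean, 0.0) ** p for x in sample) / n
    return mean + c * semidev_p ** (1.0 / p)

# Example: losses [1, 2, 3, 10] have mean 4.0; the single upside deviation
# of 6 gives semi-deviation 3.0, so rho = 4.0 + 0.5 * 3.0 = 5.5.
risk = mean_lp_semideviation([1.0, 2.0, 3.0, 10.0])
```

Penalizing only upside deviations makes the measure suitable for losses: outcomes above the mean increase risk, outcomes below it do not.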

Talk 3: Higher-Order Polynomial Model-Based Derivative-Free Methods
Speaker: Abraar Chaudhry
Abstract: Traditional model-based derivative-free methods, such as those pioneered by Powell, iteratively construct second-order interpolation models of the objective function and optimize these models over trust regions. Higher-order polynomials may provide better models; however, they are hard to optimize. We propose a new approach to constructing and optimizing higher-degree polynomial models for derivative-free optimization, using techniques from polynomial optimization and sums of squares. We will discuss the practical and theoretical properties of our method.
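As background for the interpolation-plus-trust-region idea, here is a toy one-dimensional step: fit the unique quadratic through three sample points, then minimize it over a trust region. Real methods (e.g. Powell-style) work in R^n with careful geometry management, and the talk's contribution is handling higher-degree models via sum-of-squares techniques; none of that is reflected in this sketch, whose names are ours.

```python
# One model-based DFO step in 1D: quadratic interpolation + trust region.
# Illustrative only; not the higher-degree sum-of-squares method of the talk.

def quadratic_model(pts, vals):
    # Fit m(x) = a*x^2 + b*x + c through three interpolation points
    # via divided differences (Newton form expanded to monomials).
    (x0, x1, x2), (f0, f1, f2) = pts, vals
    d01 = (f1 - f0) / (x1 - x0)
    d12 = (f2 - f1) / (x2 - x1)
    a = (d12 - d01) / (x2 - x0)
    b = d01 - a * (x0 + x1)
    c = f0 - a * x0 ** 2 - b * x0
    return a, b, c

def trust_region_step(center, delta, a, b, c):
    # Minimize the quadratic model over [center - delta, center + delta]:
    # the minimum is at an endpoint or at the interior stationary point.
    lo, hi = center - delta, center + delta
    candidates = [lo, hi]
    if a > 0:
        stat = -b / (2 * a)  # unconstrained minimizer of a convex model
        if lo <= stat <= hi:
            candidates.append(stat)
    return min(candidates, key=lambda x: a * x ** 2 + b * x + c)

# Sampling f(x) = (x - 1)^2 at 0, 0.5, 2 recovers the quadratic exactly,
# and the step from center 0 with radius 2 lands at the minimizer x = 1.
a, b, c = quadratic_model((0.0, 0.5, 2.0), (1.0, 0.25, 1.0))
step = trust_region_step(0.0, 2.0, a, b, c)
```

For a quadratic model this subproblem is easy; the difficulty the talk addresses is that minimizing a degree-4-or-higher polynomial model over a trust region is no longer straightforward, motivating the sum-of-squares machinery.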

Speakers

Albert Berahas


Zhichao Jia

PhD Student, Georgia Institute of Technology
Taper Hall (THH) 208, 3501 Trousdale Pkwy, Los Angeles, CA 90089

