Thursday July 24, 2025 10:30am - 11:45am PDT
Session: Parallel and Distributed Optimization
Chair: Michel Lahmann

Talk 1: Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity
Speaker: Artavazd Maranjyan
Abstract: Asynchronous Stochastic Gradient Descent (Asynchronous SGD) is a cornerstone method for parallelizing learning in distributed machine learning. However, its performance suffers under arbitrarily heterogeneous computation times across workers, leading to suboptimal time complexity and inefficiency as the number of workers scales. While several Asynchronous SGD variants have been proposed, recent findings by Tyurin & Richtárik (NeurIPS 2023) reveal that none achieve optimal time complexity, leaving a significant gap in the literature. In this paper, we propose Ringmaster ASGD, a novel Asynchronous SGD method designed to address these limitations and tame the inherent challenges of Asynchronous SGD. We establish, through rigorous theoretical analysis, that Ringmaster ASGD achieves optimal time complexity under arbitrarily heterogeneous and dynamically fluctuating worker computation times. This makes it the first Asynchronous SGD method to meet the theoretical lower bounds for time complexity in such scenarios.
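The heterogeneous-worker setting the abstract describes can be made concrete with a toy simulation of plain Asynchronous SGD, in which each applied gradient may have been computed at a stale iterate. This is a generic sketch of the *setting*, not the Ringmaster ASGD update rule (which the abstract does not specify); the function `async_sgd`, the random delay model, and the quadratic objective are all illustrative assumptions.

```python
import random

# Toy simulation of plain Asynchronous SGD on f(x) = 0.5 * x^2 (gradient = x).
# Workers have heterogeneous compute times, so the gradient applied at each
# step may come from a stale iterate. Illustrates the setting the talk
# addresses; this is NOT the Ringmaster ASGD update rule.

def async_sgd(num_workers=4, steps=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    x = 10.0
    # Each worker starts computing a gradient at the current iterate and
    # finishes after a random, worker-dependent delay (slower workers later).
    jobs = [(rng.randint(1, 5 * (w + 1)), x) for w in range(num_workers)]
    for _ in range(steps):
        # The server applies whichever gradient finishes first.
        w = min(range(num_workers), key=lambda i: jobs[i][0])
        finish, x_stale = jobs[w]
        # Apply the (possibly stale) gradient of f evaluated at x_stale.
        x -= lr * x_stale
        # That worker immediately starts a new gradient at the fresh iterate.
        jobs[w] = (finish + rng.randint(1, 5 * (w + 1)), x)
    return x

final = async_sgd()
# Despite staleness, the iterate drifts toward the minimizer x* = 0.
```

With a larger learning rate or more extreme delays, staleness can destabilize this loop — which is exactly the regime where delay-aware methods matter.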

Talk 2: GPU-Based Complete and Rigorous Search for Continuous Global Optimization and Constraint Satisfaction
Speaker: Guanglu Zhang
Abstract: Continuous optimization is of significance in many science and engineering fields. In many practical applications, it is desirable, and sometimes indispensable, to find the global optimum when solving continuous nonconvex optimization problems. However, popular gradient-based methods (e.g., interior point methods) and heuristic methods (e.g., genetic algorithms) often become trapped in local optima and do not take rounding errors into account, leading to inconsistent or incorrect results. Several methods using interval analysis have been introduced to enclose the guaranteed global optimum for continuous nonconvex optimization, but these interval methods are significantly more computationally expensive than gradient-based methods and heuristic methods and therefore are not commonly adopted in practical applications. Importantly, the unprecedented computational power brought by modern Graphics Processing Units (GPUs) offers new opportunities to overcome the computational bottleneck in global optimization using interval analysis. However, existing interval methods cannot utilize this computational power, because they are designed for CPU-based sequential computing, while modern GPUs are designed for massively parallel computing with a distinctive compute and memory architecture. To address this research gap, we introduce a novel GPU-based, complete, and rigorous method that can efficiently enclose the guaranteed global optimum, to user-specified tolerances, for continuous nonconvex optimization problems, especially large-scale ones. Using interval analysis, coupled with the computational power and architecture of GPUs, the method iteratively rules out the suboptimal regions in the search domain where the global optimum cannot exist and leaves a finite set of regions where the global optimum must exist.
The method is validated by enclosing the global optimum of ten multimodal benchmark test functions with scalable dimensions, including the Levy, Ackley, and Griewank functions. These test functions represent grand challenges of global optimization, and enclosing their guaranteed global optimum with more than 80 variables has not previously been reported in the literature. Our method successfully encloses the global optimum of each of these ten test functions with up to 10,000 variables, using a server with one GPU, in reasonable computation time, demonstrating a linear increase in the total number of iterations and an approximately quadratic increase in computation time with the number of variables. The method is also applied effectively and efficiently to several practical engineering optimization problems.
Corresponding publication: Zhang, G., Feng, W., & Cagan, J. (2024). A GPU-Based Parallel Region Classification Method for Continuous Constraint Satisfaction Problems. Journal of Mechanical Design, 146(4), 041705.
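The region-elimination loop the abstract describes — discard boxes where the global optimum provably cannot lie, bisect the rest — can be sketched in one dimension. This is a minimal serial illustration of interval branch and bound, not the paper's GPU method; the test function and the hand-written `interval_bounds` inclusion function are illustrative assumptions.

```python
# Minimal 1-D sketch of interval region elimination: bisect the search
# domain, bound f on each box with an inclusion function, and discard any
# box whose lower bound exceeds the best known upper bound. A serial
# illustration only; the paper's contribution is a massively parallel
# GPU formulation of this region classification.

def f(x):
    return (x - 1.0) ** 2 + 0.5   # global minimum f(1) = 0.5

def interval_bounds(lo, hi):
    # Inclusion bounds for f on [lo, hi]: (x-1)^2 is monotone on each
    # side of x = 1, so the extrema occur at endpoints (or at x = 1).
    a, b = lo - 1.0, hi - 1.0
    sq_hi = max(a * a, b * b)
    sq_lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return sq_lo + 0.5, sq_hi + 0.5

def interval_minimize(lo, hi, tol=1e-6):
    boxes = [(lo, hi)]
    best_ub = float("inf")
    while True:
        nxt = []
        for blo, bhi in boxes:
            lb, ub = interval_bounds(blo, bhi)
            best_ub = min(best_ub, ub)
            if lb > best_ub:          # box cannot contain the global optimum
                continue
            if bhi - blo > tol:       # still too wide: bisect
                mid = 0.5 * (blo + bhi)
                nxt += [(blo, mid), (mid, bhi)]
            else:
                nxt.append((blo, bhi))
        if all(b - a <= tol for a, b in nxt):
            return nxt, best_ub       # surviving boxes enclose the optimum
        boxes = nxt

surviving, ub = interval_minimize(-4.0, 6.0)
# All surviving boxes cluster around the minimizer x* = 1.
```

In the GPU setting, the per-box bound-and-classify step is what parallelizes naturally: every box in the current list can be evaluated independently.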

Talk 3: A parallelizable ADMM approach for block-structured quadratic programs
Speaker: Michel Lahmann
Abstract: Block-structured quadratic programs (QPs) arise frequently in the context of optimal control problems. The goal of our study is to provide a fast solution to these QPs, which is crucial for the successful application of optimal control algorithms to many real-world problems. Existing ADMM-based QP solvers usually split the problem into an equality-constrained QP and a projection onto a box. In general, this is a proven and widely used approach. Nevertheless, splitting the problem in this way does not allow the block structure of QPs arising from optimal control problems to be fully exploited. With our new splitting of the problem, we are able to exploit this block structure and solve parts of the problem in parallel. We will introduce the resulting method and present numerical results.
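The conventional splitting the abstract contrasts against — alternating a quadratic step with a projection onto a box — can be sketched for a simple box-constrained QP. This is the standard ADMM baseline, not the authors' new block-structured splitting; the function `admm_box_qp`, the penalty `rho`, and the toy problem are illustrative assumptions.

```python
import numpy as np

# Standard ADMM splitting for the box-constrained QP
#   minimize 0.5 x'Px + q'x   subject to   l <= x <= u,
# alternating a linear solve (the "equality-constrained QP" step) with a
# projection onto the box. Illustrative baseline only; the talk proposes
# a different splitting that exposes the block structure of
# optimal-control QPs and is not reproduced here.

def admm_box_qp(P, q, l, u, rho=1.0, iters=500):
    n = q.size
    x = np.zeros(n)
    z = np.zeros(n)
    y = np.zeros(n)                    # scaled dual variable
    K = P + rho * np.eye(n)            # x-update system, factorable once
    for _ in range(iters):
        x = np.linalg.solve(K, rho * (z - y) - q)  # quadratic step
        z = np.clip(x + y, l, u)                   # projection onto the box
        y = y + x - z                              # dual update
    return z

# Example: minimize 0.5*(x1^2 + x2^2) - 2*x1 - 2*x2 over [0, 1]^2.
# The unconstrained minimizer is (2, 2); the box clips it to (1, 1).
P = np.eye(2)
q = np.array([-2.0, -2.0])
sol = admm_box_qp(P, q, l=np.zeros(2), u=np.ones(2))
```

In an optimal-control QP, `P` and the constraints are block-banded over the time horizon; a splitting aligned with those blocks is what lets the per-stage subproblems be solved in parallel.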

Speakers

Artavazd Maranjyan


Guanglu Zhang

Research Scientist, Carnegie Mellon University
Bio: Dr. Zhang is a research scientist at Carnegie Mellon University. His research in mathematical optimization employs interval analysis, coupled with…

Michel Lahmann

Taper Hall (THH) 112 3501 Trousdale Pkwy, 112, Los Angeles, CA 90089
