Monday July 21, 2025 10:30am - 11:45am PDT
Session: Large-scale Optimization Algorithms and Implementations
Chair: Johannes Brust
Cluster: Computational Software

Talk 1: GPU Implementation of Algorithm NCL
Speaker: Michael Saunders
Abstract: For constrained optimization, LANCELOT solves a sequence of about 10 subproblems, each minimizing an augmented Lagrangian subject to bounds; these subproblems are immune to LICQ and MPEC difficulties. Algorithm NCL solves equivalent subproblems that are suited to nonlinear interior methods. We focus on reducing the associated KKT systems to smaller systems that can be solved by Cholesky-type factorizations. Our NCL implementation is based on MadNLP.jl (a nonlinear optimization solver that runs on GPUs) and CUDSS.jl (a Julia interface to the NVIDIA library cuDSS). We present numerical results on large SCOPF problems (which are MPECs). (Joint work with Alexis Montoison, François Pacaud, Sungho Shin, and Dominique Orban.)
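
As background for the subproblem equivalence the abstract refers to, the displays below contrast the two formulations; the notation (multiplier estimate yₖ, penalty ρₖ) and sign conventions are the usual augmented-Lagrangian ones and may differ from the speakers' exact statement.

    LANCELOT subproblem:  min_x      f(x) − yₖᵀ c(x) + (ρₖ/2) ‖c(x)‖²   s.t.  ℓ ≤ x ≤ u

    NCL subproblem:       min_{x,r}  f(x) + yₖᵀ r + (ρₖ/2) ‖r‖²         s.t.  c(x) + r = 0,  ℓ ≤ x ≤ u

The NCL form introduces residual variables r, and the Jacobian of the constraint (x, r) ↦ c(x) + r always has full row rank, which is why interior methods remain well behaved even when LICQ fails for the original problem.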

Talk 2: A Two-Stage Optimization-Based Algorithm for Tensor Decomposition
Speaker: Zequn Zheng
Abstract: Tensor canonical polyadic decomposition is important for exploring multi-dimensional tensor structure. It has become an increasingly active area of research, with important applications in data science, statistics, and engineering. However, finding a tensor decomposition is difficult when the tensor's rank is greater than its second-largest dimension. In this case, traditional optimization methods such as nonlinear least squares or alternating least squares usually fail to find the decomposition, while direct methods typically suffer from high computational cost. We propose a novel two-stage optimization-based algorithm for the general tensor decomposition problem when the rank lies between the second-largest and the largest dimension. We will discuss the equivalence between tensor decompositions and the global minimizers of the two-stage optimization problems. We will also show promising numerical results for our algorithm compared with other state-of-the-art methods for tensor decomposition. (Joint work with Hongchao Zhang.)
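
For context, the alternating least squares (ALS) baseline the abstract mentions cycles through the factor matrices of T ≈ Σᵣ aᵣ ∘ bᵣ ∘ cᵣ, solving a linear least-squares problem for each factor with the others fixed. The NumPy sketch below illustrates that classical baseline only, not the speakers' two-stage algorithm; all function names are illustrative.

    import numpy as np

    def unfold(T, mode):
        # Mode-n unfolding (C order): move `mode` to the front, flatten the rest.
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def khatri_rao(A, B):
        # Column-wise Kronecker product; row ordering matches `unfold` above.
        return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

    def cp_als(T, R, iters=500, seed=0):
        # Classical CP-ALS for a 3-way tensor: each factor update is a
        # linear least-squares solve with the other two factors fixed.
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A = rng.standard_normal((I, R))
        B = rng.standard_normal((J, R))
        C = rng.standard_normal((K, R))
        for _ in range(iters):
            A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
            B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
            C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
        return A, B, C

    # Quick check on a random rank-5 tensor (rank < smallest dimension,
    # the easy regime where ALS typically succeeds).
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((n, 5)) for n in (6, 7, 8))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(T, 5)
    print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)))

The regime the talk targets (rank above the second-largest dimension) is precisely where this kind of iteration tends to stall, which motivates the two-stage approach.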

Talk 3: A nonsmooth exact penalty method for equality-constrained optimization: Complexity and implementation
Speaker: Dominique Orban
Abstract: Penalty methods are a well-known class of algorithms for constrained optimization. They transform a constrained problem into a sequence of unconstrained penalized problems, in the hope that approximate solutions of the latter converge to a solution of the former. If Lagrange multipliers exist, exact penalty methods ensure that the penalty parameter need only increase a finite number of times, but they are typically scorned in smooth optimization because the penalized problems are not smooth, which led researchers to consider the implementation of exact penalty methods inconvenient. Recently, advances in proximal methods have led to increasingly efficient solvers for nonsmooth optimization. We show that the exact ℓ2-penalty method for equality-constrained optimization can in fact be implemented efficiently by solving the penalized problem with a proximal-type algorithm. We study the convergence of our algorithm and establish a worst-case complexity bound of O(ϵ^{−2}) to bring a stationarity measure below ϵ > 0 under the Mangasarian-Fromovitz constraint qualification and Lipschitz continuity of the objective gradient and constraint Jacobian. In a degenerate scenario where the penalty parameter grows unbounded, the complexity becomes O(ϵ^{−8}), which is worse than another bound found in the literature; we justify the difference by arguing that our feasibility measure is properly scaled. Finally, we report numerical experience on small-scale problems from a standard collection and compare our solver with an augmented-Lagrangian method and an SQP method. Our preliminary implementation is on par with the augmented-Lagrangian method in terms of robustness and efficiency. It is on par with the SQP method in terms of robustness, though the former remains ahead in the number of problem function evaluations.
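
To make the nonsmoothness concrete: for min_x f(x) subject to c(x) = 0, the exact ℓ2-penalty function discussed in the abstract is

    φ_σ(x) = f(x) + σ ‖c(x)‖₂,

which, unlike the squared-norm quadratic penalty, is nonsmooth precisely where c(x) = 0, that is, at every feasible point; yet it is exact: under standard assumptions, for any finite σ larger than the norm of the Lagrange multipliers, local solutions of the constrained problem are local minimizers of φ_σ. One standard proximal-type iteration for such composite problems, given here only as a sketch since the abstract does not spell out the subproblem, linearizes inside the norm and adds a proximal term with step parameter νₖ:

    dₖ = argmin_d  ∇f(xₖ)ᵀ d + σ ‖c(xₖ) + J(xₖ) d‖₂ + (1/(2νₖ)) ‖d‖²,    xₖ₊₁ = xₖ + dₖ.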

Speakers

Michael Saunders

Professor (Research) Emeritus, Stanford University

Dominique Orban

Location: Taper Hall (THH) 212, 3501 Trousdale Pkwy, Los Angeles, CA 90089
