Tuesday July 22, 2025 1:15pm - 2:30pm PDT
Session: Methods for Large-Scale Nonlinear Optimization V
Chair: Raghu Bollapragada
Cluster: Nonlinear Optimization

Talk 1: The Interior-Point Frank-Wolfe Method for Minimizing Smooth Convex Functionals over Linearly Constrained Probability Spaces
Speaker: Di Yu
Abstract: Several important and large problem classes, including optimal experimental design, emergency volunteer response, spectral optimal transport, and certain queuing problems, can be posed as minimizing a smooth convex functional over the space of compactly supported probability measures subject to a linear constraint map. We introduce the interior-point Frank-Wolfe (IPFW) method for solving such problems, in which a sequence of barrier problems is constructed to handle the constraints, each solved to increasing accuracy using a Frank-Wolfe method. Importantly, the Frank-Wolfe sub-problems are shown to have a "closed-form" solution expressed in terms of the constraint functionals and the influence function of the objective. The sequence of probability measures produced by the IPFW framework is shown to converge, in an appropriate sense, to the correct solution, and a complexity analysis is provided. The problem and the proposed solution are illustrated through examples.
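
Editorial note: the following is a minimal sketch, not the speakers' IPFW method. It only illustrates the barrier-plus-Frank-Wolfe structure on a discretized probability simplex, assuming a smooth convex objective with gradient grad_f and linear inequality constraints A x <= b; the names grad_f, A, b and the barrier schedule are illustrative assumptions.

    import numpy as np

    def barrier_frank_wolfe(grad_f, A, b, x0, mu0=1.0, outer_iters=20,
                            inner_iters=200, shrink=0.5):
        """Sketch: minimize f(x) over the simplex with A x <= b via a sequence
        of log-barrier subproblems, each approximately solved by Frank-Wolfe."""
        x, mu = x0.copy(), mu0
        for _ in range(outer_iters):
            for k in range(inner_iters):
                slack = b - A @ x                           # must stay strictly positive
                g = grad_f(x) + mu * (A.T @ (1.0 / slack))  # gradient of the barrier objective
                s = np.zeros_like(x)
                s[np.argmin(g)] = 1.0                       # FW vertex: closed form on the simplex
                d = s - x
                gamma = 2.0 / (k + 2)                       # standard FW step size
                Ad = A @ d
                if np.any(Ad > 0):                          # shrink step to keep the slack positive
                    gamma = min(gamma, 0.99 * np.min(slack[Ad > 0] / Ad[Ad > 0]))
                x = x + gamma * d
            mu *= shrink                                    # tighten the barrier
        return x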

Talk 2: Local convergence of adaptively regularized tensor methods
Speaker: Karl Welzel
Abstract: Tensor methods are methods for unconstrained continuous optimization that can incorporate derivative information up to order p > 2 by computing a step based on the pth-order Taylor expansion at each iteration. The most important among them are regularization-based tensor methods, which have been shown to have optimal worst-case iteration complexity for finding an approximate minimizer. Moreover, as one might expect, this worst-case complexity improves as p increases, highlighting the potential advantage of tensor methods. Still, the global complexity results only guarantee pessimistic sublinear rates, so it is natural to ask how local rates depend on the order p of the Taylor expansion. For functions that are uniformly convex of order q (functions that grow around the minimizer x* like the qth power of the distance to x*) and a fixed regularization parameter, the answer was given in a 2022 paper by Doikov and Nesterov: one obtains local convergence of order p/(q-1) in function values and gradient norms whenever p > q-1. In particular, when the Hessian is positive definite at the minimizer (q = 2) the convergence is of order p, but even when the Hessian is singular at x* (q > 2), superlinear convergence (compared with Newton's linear rate) is possible as long as enough derivatives are available. The regularization parameter in their analysis depends on the Lipschitz constant of the pth derivative. Since this constant is not usually known in advance, adaptive regularization methods are more practical. We extend the local convergence results to locally uniformly convex functions and fully adaptive methods. We discuss how, for p > 2, it becomes crucial to select the "right" minimizer of the regularized local model at each iteration to ensure that all iterations are eventually successful. Counterexamples show that, in particular, the global minimizer of the subproblem is not suitable in general. If the right minimizer is used, the local convergence of order p/(q-1) is preserved; otherwise the rate stays superlinear, but with an exponent that can be arbitrarily close to one, depending on the algorithm parameters.
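
Editorial note: as an illustration only, and not the speaker's method, the sketch below shows the adaptive-regularization mechanism in its simplest case p = 2 (cubic regularization): a step is accepted or rejected based on the actual-versus-predicted decrease, and the regularization parameter sigma is adjusted accordingly. The callables f, grad, hess and all parameter values are assumptions; the talk concerns the general pth-order, fully adaptive setting.

    import numpy as np

    def adaptive_cubic_reg(f, grad, hess, x, sigma=1.0, tol=1e-8,
                           max_iters=200, eta=0.1, gamma=2.0):
        """Sketch of adaptive cubic regularization (the p = 2 member of the ARp family)."""
        for _ in range(max_iters):
            g, H = grad(x), hess(x)
            if np.linalg.norm(g) <= tol:
                break
            # Approximately minimize the regularized model
            #   m(s) = g.T s + 0.5 s.T H s + (sigma/3) ||s||^3
            # (crudely, by gradient descent on the model; a dedicated subproblem
            # solver would be used in practice).
            s = np.zeros_like(x)
            for _ in range(100):
                s -= 1e-2 * (g + H @ s + sigma * np.linalg.norm(s) * s)
            model_decrease = -(g @ s + 0.5 * s @ (H @ s)
                               + (sigma / 3.0) * np.linalg.norm(s) ** 3)
            rho = (f(x) - f(x + s)) / max(model_decrease, 1e-16)
            if rho >= eta:                      # successful: accept, relax regularization
                x = x + s
                sigma = max(sigma / gamma, 1e-8)
            else:                               # unsuccessful: reject, increase regularization
                sigma *= gamma
        return x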

Talk 3: Noise-Aware Sequential Quadratic Programming for Equality Constrained Optimization with Rank-Deficient Jacobians
Speaker: Jiahao Shi
Abstract: We propose and analyze a sequential quadratic programming algorithm for minimizing a noisy nonlinear smooth function subject to noisy nonlinear smooth equality constraints. The algorithm uses a step-decomposition strategy and, as a result, is robust to potential rank deficiency in the constraints, allows for two different step-size strategies, and has an early-stopping mechanism. Under a linear independence constraint qualification, convergence is established to a neighborhood of a first-order stationary point, where the radius of the neighborhood is proportional to the noise level in the objective function and constraints. Moreover, in the rank-deficient setting, convergence to a neighborhood of an infeasible stationary point is established. Numerical experiments demonstrate the efficiency and robustness of the proposed method.
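
Editorial note: the sketch below illustrates a generic step-decomposition SQP iteration (a Byrd-Omojokun-style normal/tangential split), not the speakers' algorithm: a least-squares normal step tolerates rank-deficient Jacobians, and a tangential step moves in the null space of the constraint Jacobian. The callables noisy_grad, noisy_c, noisy_J and the fixed step size alpha are assumptions; the two adaptive step-size strategies and the early-stopping mechanism analyzed in the talk are not shown.

    import numpy as np

    def sqp_decomposed_step(noisy_grad, noisy_c, noisy_J, x, alpha=1e-2):
        """Sketch of one step-decomposition SQP iteration with noisy problem data."""
        g, c, J = noisy_grad(x), noisy_c(x), noisy_J(x)
        # Normal step: least-squares reduction of the linearized constraints;
        # np.linalg.lstsq returns a minimum-norm solution even if J is rank-deficient.
        v, *_ = np.linalg.lstsq(J, -c, rcond=None)
        # Tangential step: project the (noisy) objective gradient onto the null
        # space of J so the linearized constraint value is preserved.
        P = np.eye(x.size) - np.linalg.pinv(J) @ J   # orthogonal projector onto null(J)
        u = -P @ g
        return x + alpha * (v + u)                   # fixed step size, for illustration only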

Speakers
Raghu Bollapragada
Di Yu
Karl Welzel
Jiahao Shi
Taper Hall (THH) 101, 3501 Trousdale Pkwy, Los Angeles, CA 90089
