Tuesday July 22, 2025 10:30am - 11:45am PDT
Session: Methods for Large-Scale Nonlinear Optimization IV
Chair: Albert Berahas
Cluster: Nonlinear Optimization

Talk 1: High Probability Analysis for Negative Curvature Methods with Probabilistic Oracles
Speaker: Wanping Dong
Abstract: We consider a negative curvature method for continuous nonlinear unconstrained optimization problems in a stochastic setting where the function values, gradients, and Hessian (products) are available only through inexact probabilistic oracles. Our goal is to develop algorithms with high-probability second-order convergence guarantees and affordable complexity, so that they can be used for large-scale problems. We introduce general conditions on the probabilistic oracles and propose a method that dynamically chooses between negative curvature and descent steps. We derive a high-probability tail bound on the iteration complexity of the algorithm and show improvements over our previous negative curvature method. A practical variant is implemented to illustrate the power of the proposed algorithm.
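
The sketch below illustrates the kind of switching logic such a method uses; it is not the speaker's algorithm. It queries noisy gradient and Hessian oracles, takes a negative-curvature step when the estimated smallest eigenvalue is sufficiently negative, and a descent step otherwise. The toy objective, noise model, and step sizes are illustrative assumptions.

```python
import numpy as np

def noisy_grad(grad, x, sigma, rng):
    """Gradient oracle corrupted by Gaussian noise (stand-in for a probabilistic oracle)."""
    return grad(x) + sigma * rng.standard_normal(x.shape)

def min_eig_direction(hess_estimate):
    """Smallest eigenvalue and eigenvector of a (possibly noisy) Hessian estimate."""
    H = 0.5 * (hess_estimate + hess_estimate.T)   # symmetrize the noisy estimate
    w, V = np.linalg.eigh(H)
    return w[0], V[:, 0]

def curvature_aware_step(x, g, H, alpha=0.1, eps_curv=1e-3):
    """Take a negative-curvature step if one is detected, otherwise a descent step."""
    lam, v = min_eig_direction(H)
    if lam < -eps_curv:                       # sufficiently negative curvature found
        v = v if np.dot(g, v) <= 0 else -v    # pick the descent side of the eigenvector
        return x + alpha * abs(lam) * v       # negative-curvature step
    return x - alpha * g                      # ordinary descent step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Nonconvex toy objective f(x) = x0^2 - x1^2 + 0.25*x1^4 with a saddle at the origin.
    grad = lambda x: np.array([2 * x[0], -2 * x[1] + x[1] ** 3])
    hess = lambda x: np.array([[2.0, 0.0], [0.0, -2.0 + 3 * x[1] ** 2]])
    x = np.array([1e-3, 1e-3])                # start next to the saddle point
    for _ in range(200):
        g = noisy_grad(grad, x, sigma=1e-2, rng=rng)
        H = hess(x) + 1e-2 * rng.standard_normal((2, 2))   # noisy Hessian oracle
        x = curvature_aware_step(x, g, H)
    print("final iterate:", x)                # ends near a minimizer (0, ±sqrt(2))
```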

Talk 2: Retrospective Approximation for Stochastic Constrained Problems Using Sequential Quadratic Programming
Speaker: Shagun Gupta
Abstract: Sequential Quadratic Programming (SQP) is one of the state-of-the-art algorithms for solving deterministic constrained nonlinear optimization problems. In recent years, the framework has been extended to problems with deterministic equality and inequality constraints and stochastic objective functions. In response to the challenges posed by stochasticity, various schemes have been incorporated into SQP algorithms to adapt key parameters, such as the step size and merit parameter, from the deterministic setting to the stochastic setting; these include stochastic line search, Lipschitz constant estimation, and Hessian averaging. In our work, we leverage SQP algorithms within the framework of Retrospective Approximation. This framework solves stochastic constrained problems by having the SQP algorithm solve a sequence of subsampled deterministic subproblems, each solved not to optimality but to a specified accuracy, with the sample size increasing across subproblems. By decoupling the stochasticity from the SQP algorithm, Retrospective Approximation enables the use of legacy deterministic solvers and reduces the need for hyper-parameter tuning in stochastic settings. We provide theoretical requirements on the growth of the subsampling batch size and on the solution accuracy of the deterministic subproblems that guarantee convergence, and we present numerical experiments showcasing the use of legacy deterministic solvers for stochastic constrained problems.
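
As an illustration of the Retrospective Approximation structure described above, the following sketch solves a sequence of sample-average subproblems with a legacy deterministic SQP solver (SciPy's SLSQP), growing the sample size and tightening the subproblem tolerance at each outer iteration. The toy problem and schedule choices are assumptions, not the speakers' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def retrospective_approximation(dim=5, outer_iters=6, n0=8, tol0=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.standard_normal(dim)            # "true" data mean (unknown to the solver)
    x = np.zeros(dim)                        # warm start for the first subproblem
    for k in range(outer_iters):
        n_k = n0 * 2 ** k                    # sample size grows geometrically
        tol_k = tol0 / 2 ** k                # subproblem accuracy tightens
        xi = mu + rng.standard_normal((n_k, dim))            # fresh subsample
        # Deterministic SAA subproblem: min (1/n) sum_i ||z - xi_i||^2  s.t.  sum(z) = 1.
        obj = lambda z: np.mean(np.sum((z - xi) ** 2, axis=1))
        cons = [{"type": "eq", "fun": lambda z: np.sum(z) - 1.0}]
        res = minimize(obj, x, method="SLSQP", constraints=cons,
                       options={"ftol": tol_k, "maxiter": 200})
        x = res.x                            # warm-start the next, larger subproblem
        print(f"outer iter {k}: n={n_k:4d}  tol={tol_k:.1e}  obj={res.fun:.4f}")
    return x

if __name__ == "__main__":
    print("final solution:", np.round(retrospective_approximation(), 3))
```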

Talk 3: Stochastic Second-order Inexact Augmented Lagrangian Framework for Nonconvex Expectation Constrained Optimization
Speaker: Yash Kumar
Abstract: In this talk, we present methods for solving stochastic nonconvex optimization problems where both the objective function and the constraints are expectations of stochastic functions. We consider an inexact Augmented Lagrangian framework for solving these problems, employing stochastic second-order methods for the subproblems instead of first-order methods. This framework ensures convergence to second-order stationary points instead of approximate first-order stationary points. Furthermore, these methods do not require access to full Hessians but only Hessian-vector products, which are typically twice the computational cost of gradients. We provide convergence guarantees for the stochastic second-order inexact Augmented Lagrangian framework, along with total computational complexity guarantees for various second-order subproblem solvers. Numerical experiments on constrained machine learning classification problems demonstrate the efficiency of the proposed framework.
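
The following sketch shows the structure of an inexact augmented Lagrangian loop whose subproblems are solved by a second-order method that needs only Hessian-vector products (SciPy's Newton-CG via its hessp argument). The toy problem, penalty parameter, and multiplier schedule are illustrative assumptions, and the sketch is deterministic; it is not the speakers' stochastic framework.

```python
import numpy as np
from scipy.optimize import minimize

# Nonconvex objective f(x) = sum(x_i^4 - x_i^2) with equality constraint c(x) = sum(x) - 1 = 0.
def f_grad(x):
    return 4 * x ** 3 - 2 * x

def c(x):
    return np.sum(x) - 1.0

def augmented_lagrangian(dim=10, outer_iters=8, rho=10.0, seed=0):
    rng = np.random.default_rng(seed)
    x, lam = rng.standard_normal(dim), 0.0
    for k in range(outer_iters):
        # Inner subproblem: minimize L(x) = f(x) + lam*c(x) + (rho/2)*c(x)^2, solved inexactly.
        L = lambda z: np.sum(z ** 4 - z ** 2) + lam * c(z) + 0.5 * rho * c(z) ** 2
        gL = lambda z: f_grad(z) + (lam + rho * c(z)) * np.ones_like(z)
        # Hessian-vector product of L: diag(12z^2 - 2) v + rho (1^T v) 1; no full Hessian is formed.
        hvp = lambda z, v: (12 * z ** 2 - 2) * v + rho * np.sum(v) * np.ones_like(z)
        res = minimize(L, x, jac=gL, hessp=hvp, method="Newton-CG",
                       options={"xtol": 1e-6, "maxiter": 50})
        x = res.x
        lam += rho * c(x)                    # first-order multiplier update
        print(f"outer iter {k}: |c(x)|={abs(c(x)):.2e}  f={np.sum(x**4 - x**2):.4f}")
    return x

if __name__ == "__main__":
    augmented_lagrangian()
```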

Speakers
Albert Berahas
Wanping Dong
Shagun Gupta
Yash Kumar
Taper Hall (THH) 208, 3501 Trousdale Pkwy, Los Angeles, CA 90089
