Session: Zeroth-Order and Derivative-Free Optimization
Chair: Anjie Ding

Talk 1: Sequentially testing a decrease condition in stochastic derivative-free optimization
Speaker: Anjie Ding
Abstract: In derivative-free optimization algorithms, the progress made by each trial step is measured by the decrease attained in the objective function, and many algorithms require this decrease to be sufficiently large compared to a forcing function of the step size, typically a multiple of the square of the step size. When the function is stochastic, estimating the decrease accurately enough to ensure the desired convergence rate requires a sample size of the order of the inverse fourth power of the step size. In this talk, we introduce a new way of testing the satisfaction of a sufficient decrease condition in stochastic derivative-free optimization by framing it as a hypothesis-testing problem and solving it by means of a sequential hypothesis test. The test decides between two hypotheses, essentially corresponding to accepting or rejecting the sufficient decrease, and outputs the probabilities of making each decision incorrectly. To establish a standard (non-convex) convergence rate, these probabilities must satisfy certain upper bounds so that the decisions remain sufficiently reliable as the step size approaches zero. When the function noise is Gaussian, we will show that the sample size required to estimate the decrease drops to the order of the inverse square of the step size when the decrease is far from the forcing function and the step size is not yet close to zero. In this case, the test sequentially estimates the potential decrease by observing the function (at the current and trial points) until the running sum crosses either a lower or an upper bound chosen as a function of the noise variance and the step size.
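
The sequential decision described in the abstract is in the spirit of Wald's sequential probability ratio test. The Python sketch below is a generic SPRT for a decrease under Gaussian noise with known variance; the hypotheses, thresholds, and names used here are illustrative assumptions for this page, not the test presented in the talk.

import numpy as np

def sprt_sufficient_decrease(sample_decrease, sigma, forcing,
                             alpha_err=0.05, beta_err=0.05, max_samples=10_000):
    # Generic Wald SPRT deciding between
    #   H0: mean observed decrease <= 0        (reject the trial step)
    #   H1: mean observed decrease >= forcing  (accept sufficient decrease),
    # assuming each observation of the decrease is Gaussian with known
    # standard deviation sigma.  sample_decrease() returns one noisy
    # observation of f(current point) - f(trial point).  All names and
    # default error levels here are illustrative assumptions.
    theta0, theta1 = 0.0, forcing
    upper = np.log((1.0 - beta_err) / alpha_err)   # accept H1 at or above this
    lower = np.log(beta_err / (1.0 - alpha_err))   # accept H0 at or below this
    llr = 0.0                                      # running log-likelihood ratio
    for n in range(1, max_samples + 1):
        d = sample_decrease()
        # Log-likelihood-ratio increment for a single Gaussian observation.
        llr += (theta1 - theta0) * (d - 0.5 * (theta0 + theta1)) / sigma**2
        if llr >= upper:
            return True, n    # sufficient decrease accepted after n samples
        if llr <= lower:
            return False, n   # sufficient decrease rejected after n samples
    return llr > 0.0, max_samples  # no threshold crossed within the budget

When the true decrease is well separated from the forcing function, such a test tends to stop after far fewer observations than a fixed-sample estimate would need, which is the intuition behind the improved sample-size order mentioned in the abstract.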

Talk 2: Which is faster: Linear minimization or Projection?
Speaker: Zev Woodstock
Abstract: It is already known that, for several important sets arising in applications, performing linear minimization can be faster than projection. However, if we consider the class of all nonempty compact convex sets, can we directly compare the computational complexity of linear minimization to that of projection? This talk provides two modest results in this direction: (1) high-precision linear minimization is no slower than projection; and (2) exact linear minimization is no slower than projection under the additional assumption of polyhedrality.
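
As a concrete illustration of the first sentence of the abstract, the Python sketch below compares a linear minimization oracle with Euclidean projection on the unit simplex, a set for which the oracle is a single argmin (linear time) while the standard projection routine sorts its input. The set and both implementations are examples chosen for this page, not results from the talk.

import numpy as np

def lmo_simplex(c):
    # Linear minimization over the unit simplex: argmin of <c, x> over the
    # simplex.  The minimum is attained at a vertex, so a single argmin over
    # the coordinates of c suffices (O(n)).
    x = np.zeros_like(c, dtype=float)
    x[np.argmin(c)] = 1.0
    return x

def project_simplex(y):
    # Euclidean projection onto the unit simplex via the standard
    # sort-and-threshold formula (O(n log n) because of the sort).
    u = np.sort(y)[::-1]                      # coordinates in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(y) + 1) > css - 1.0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(y - tau, 0.0)

c = np.random.randn(1000)
print(lmo_simplex(c).sum(), project_simplex(c).sum())  # both outputs lie on the simplex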

Speakers

Anjie Ding

PhD Student, Lehigh University

Zev Woodstock

Thursday July 24, 2025 4:15pm - 5:30pm PDT
Joseph Medicine Crow Center for International and Public Affairs (DMC) 155, 3518 Trousdale Pkwy, Los Angeles, CA 90089

