Thursday July 24, 2025 10:30am - 11:45am PDT
Session: Advances in Bilevel Optimization: Algorithms and Applications
Chair: Mingyi Hong & Prashant Khanduri
Cluster: Nonlinear Optimization

Talk 1: Barrier Function for Bilevel Optimization with Coupled Lower-Level Constraints: Formulation, Approximation and Algorithms
Speaker: Xiaotian Jiang
Abstract: In this paper, we consider a bilevel optimization problem whose lower level has coupled constraints, i.e., constraints that depend on both the upper- and lower-level variables. In particular, we consider two settings for the lower-level problem: first, when the objective is strongly convex and the constraints are convex with respect to the lower-level variable; second, when the lower level is a linear program. We propose a barrier-function reformulation that translates the problem into an unconstrained one. By developing a series of new techniques, we prove that both the hyperfunction value and the hypergradient of the barrier-reformulated problem converge (uniformly) to those of the original problem under minimal assumptions. Further, to overcome the non-Lipschitz smoothness of the hyperfunction and the lower-level problem in the barrier reformulation, we design an adaptive algorithm with a non-asymptotic convergence guarantee. We also design an algorithm that converges asymptotically to a stationary point of the original problem under certain assumptions. The proposed algorithms require minimal assumptions and, to our knowledge, are the first with convergence guarantees when the lower-level problem is a linear program.
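
As a rough illustration of the barrier idea (a minimal sketch, not the authors' algorithm), the code below replaces a coupled lower-level constraint with a log-barrier term, solves the resulting unconstrained lower level by Newton's method, and differentiates its stationarity condition implicitly to obtain a hypergradient. The toy instance, the barrier parameter t, and all step sizes are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical toy instance (all names and constants are illustrative):
#   upper level : min_x  f(x, y_t(x)) = (x - 1)^2 + (y_t(x) - 1)^2
#   lower level : argmin_y  y^2   s.t.  y - x <= 0      (constraint couples x and y)
# Log-barrier reformulation of the lower level, barrier parameter t > 0:
#   y_t(x) = argmin_y  g_t(x, y) = y^2 - t * log(x - y)
t = 1e-3

def solve_lower(x):
    """Newton's method on the strictly convex barrier objective g_t(x, .)."""
    y = min(0.0, x) - 0.5                 # strictly feasible start: y < x
    for _ in range(100):
        grad = 2 * y + t / (x - y)        # d g_t / dy
        hess = 2 + t / (x - y) ** 2       # d^2 g_t / dy^2
        step = grad / hess
        while y - step >= x:              # damp the step to stay inside the barrier domain
            step *= 0.5
        y -= step
        if abs(grad) < 1e-12:
            break
    return y

def hypergradient(x):
    """Implicit differentiation of the stationarity condition d g_t / dy = 0."""
    y = solve_lower(x)
    g_yy = 2 + t / (x - y) ** 2
    g_yx = -t / (x - y) ** 2
    dydx = -g_yx / g_yy                       # implicit function theorem
    return 2 * (x - 1) + 2 * (y - 1) * dydx   # chain rule on f(x, y_t(x))

x = 2.0
for _ in range(500):                      # plain gradient descent on the hyperfunction
    x -= 0.1 * hypergradient(x)
print(f"x = {x:.4f}, y_t(x) = {solve_lower(x):.4f}")   # ~ (1, 0), the toy bilevel solution
```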

Talk 2: A Discretization Approach for Low-Dimensional Non-Convex Bilevel Optimization
Speaker: Prashant Khanduri
Abstract: Bilevel optimization has become an invaluable tool in areas such as machine learning and operations research. Most modern bilevel algorithms are developed for problems with strongly convex and/or unconstrained lower-level tasks. In this work, we deal with a class of problems where the lower-level task is non-convex and constrained. To solve this challenging class of problems, we propose a novel discretized value-function-based approach, wherein the value function and the corresponding problem reformulation are discretized. Under the proposed approach, the value function's optimization problem is transformed into an easier convex optimization task via discretization. We tackle the discretized problem using a penalty approach and establish the relations between the KKT points and the local and global minima of the two problems. We develop a gradient descent-based algorithm to solve the penalty-reformulated problem and establish finite-time convergence guarantees for it. Finally, we perform numerical experiments to confirm the effectiveness of the proposed method.
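
A minimal sketch of the discretize-then-penalize idea on a made-up one-dimensional instance (the instance, grid size, and penalty weight are all assumptions, and this is not the authors' exact scheme): the value function of a non-convex lower level is approximated by a minimum over a fixed grid, and the value-function constraint is handled with a penalty, after which plain gradient descent is applied.

```python
import numpy as np

# Hypothetical toy instance (illustrative only):
#   upper level : min_{x,y}  f(x, y) = x^2 + (y - 1)^2
#   lower level : y in argmin_y  g(x, y) = -cos(y - x),  y in [-pi, pi]  (non-convex in y)
# Value-function reformulation:  g(x, y) - v(x) <= 0  with  v(x) = min_y g(x, y),
# where v is replaced by its discretization over a fixed grid and the
# constraint is handled by a penalty.

grid = np.linspace(-np.pi, np.pi, 4001)    # discretization of the (low-dimensional) y-space

def g(x, y):      return -np.cos(y - x)
def grad_g(x, y): return np.array([-np.sin(y - x), np.sin(y - x)])   # (d/dx, d/dy)

def v_grid(x):
    """Discretized value function; gradient via Danskin's theorem at the best grid point."""
    vals = g(x, grid)
    i = int(np.argmin(vals))
    return vals[i], grad_g(x, grid[i])[0]

def penalty_grad(z, lam):
    """Subgradient of the penalty objective f(x, y) + lam * max(0, g(x, y) - v_grid(x))."""
    x, y = z
    v, dv_dx = v_grid(x)
    active = 1.0 if g(x, y) - v > 0 else 0.0
    gx, gy = grad_g(x, y)
    return np.array([2 * x + lam * active * (gx - dv_dx),
                     2 * (y - 1) + lam * active * gy])

z, lam = np.array([1.0, -1.0]), 50.0
for _ in range(3000):                      # gradient descent on the penalty problem
    z -= 0.01 * penalty_grad(z, lam)

# The exact bilevel solution forces y = x, giving x* = y* = 0.5 here
# (recovered only up to penalty and discretization error).
print(f"x = {z[0]:.3f}, y = {z[1]:.3f}")
```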

Talk 3: An Accelerated Gradient Method for Convex Simple Bilevel Optimization
Speaker: Jincheng Cao
Abstract: In this paper, we focus on simple bilevel optimization problems, where we minimize a convex smooth objective function over the optimal solution set of another convex smooth constrained optimization problem. We present a novel bilevel optimization method that locally approximates the solution set of the lower-level problem using a cutting plane approach and employs an accelerated gradient-based update to reduce the upper-level objective function over the approximated solution set. We measure the performance of our method in terms of suboptimality and infeasibility errors and provide non-asymptotic convergence guarantees for both error criteria. Specifically, when the feasible set is compact, we show that our method requires at most $\mathcal{O}(\max\{1/\sqrt{\epsilon_{f}}, 1/\epsilon_g\})$ iterations to find a solution that is $\epsilon_f$-suboptimal and $\epsilon_g$-infeasible. Moreover, under the additional assumption that the lower-level objective satisfies the $r$-th Hölderian error bound, we show that our method achieves an iteration complexity of $\tilde{\mathcal{O}}(\max\{\epsilon_{f}^{-\frac{2r-1}{2r}},\epsilon_{g}^{-\frac{2r-1}{2r}}\})$, which matches the optimal complexity of single-level convex constrained optimization when $r=1$.
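
The sketch below illustrates the cutting-plane idea on a made-up two-dimensional instance: at each step, the lower-level solution set is locally approximated by the halfspace on which the linearization of g does not exceed the lower-level optimal value (assumed known here), and a projected gradient step on f is taken over that halfspace. For simplicity this uses a plain rather than accelerated step; the talk's method adds Nesterov-type acceleration and careful parameter choices to obtain the stated rates.

```python
import numpy as np

# Hypothetical toy instance (illustrative only):
#   lower level : min_x g(x) = 0.5 * (x1 + x2 - 2)^2   (optimal set: the line x1 + x2 = 2)
#   upper level : min   f(x) = 0.5 * ||x||^2 over that set  ->  x* = (1, 1), the min-norm point

def f_grad(x):  return x
def g(x):       return 0.5 * (x[0] + x[1] - 2.0) ** 2
def g_grad(x):  return (x[0] + x[1] - 2.0) * np.ones(2)

def project_halfspace(x, a, c):
    """Euclidean projection onto {z : a @ z <= c} (closed form)."""
    viol = a @ x - c
    return x if viol <= 0 or a @ a == 0 else x - (viol / (a @ a)) * a

g_star = 0.0                    # lower-level optimal value, assumed known in this sketch
x = np.array([3.0, -2.0])
eta = 0.5                       # step size for the upper-level objective
for _ in range(80):
    # cutting plane: linearization of g at x must not exceed g_star,
    #   {z : g(x) + g_grad(x) @ (z - x) <= g_star}
    a = g_grad(x)
    c = g_star - g(x) + a @ x
    x = project_halfspace(x - eta * f_grad(x), a, c)

print(x)                        # ~ [1., 1.]
```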

Speakers
Xiaotian Jiang

Prashant Khanduri

Jincheng Cao
Taper Hall (THH) 116, 3501 Trousdale Pkwy, Los Angeles, CA 90089
