Venue: Joseph Medicine Crow Center for International and Public Affairs (DMC) 256
Thursday, July 24
 

10:30am PDT

Parallel Sessions 10N: Nonsmooth PDE Constrained Optimization: Algorithms, Analysis and Applications Part 3
Session: Nonsmooth PDE Constrained Optimization: Algorithms, Analysis and Applications Part 3
Chair: Harbir Antil
Cluster: PDE-constrained Optimization

Talk 1: An Adaptive Inexact Trust-Region Method for PDE-Constrained Optimization with Regularized Objectives
Speaker: Robert Baraldi
Abstract: We introduce an inexact trust-region method for efficiently solving regularized optimization problems governed by PDEs. In particular, we consider the class of problems in which the objective is the sum of a smooth, nonconvex function and a nonsmooth, convex function. Such objectives are pervasive in the literature; examples include basis pursuit, inverse problems, and topology optimization. The inclusion of nonsmooth regularizers and constraints is critical, as they often preserve physical properties or promote sparsity in the control. Enforcing these properties efficiently is essential given the computationally intensive nature of solving PDEs. Adaptive finite element routines are a common family of methods that can obtain accurate solutions with considerably smaller meshes; they are critical for reducing error in the solution as well as mitigating the numerical cost of solving the PDE. Our adaptive trust-region method solves the regularized objective while automatically refining the mesh for the PDE. It increases the accuracy of the gradient and objective via local error estimators and our criticality measure. We present numerical results on problems in control.
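
To make the structure concrete, here is a minimal sketch, in plain NumPy, of one step of a trust-region method for min f(x) + g(x) with g = λ‖·‖₁, where a proximal-gradient step doubles as trial direction and criticality measure. This is an illustration under those assumptions only; the PDE solve, local error estimators, and mesh refinement from the talk are deliberately left out, and none of the names below come from the speaker's code.

```python
# Hedged sketch: one trust-region step for min f(x) + lam*||x||_1 in NumPy.
import numpy as np

def prox_l1(v, t, lam):
    """Proximal operator of t*lam*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def tr_step(x, f, grad_f, lam, delta, t=1.0):
    g = grad_f(x)
    s = prox_l1(x - t * g, t, lam) - x            # prox-gradient trial step
    crit = np.linalg.norm(s) / t                  # criticality measure
    if np.linalg.norm(s) > delta:                 # stay inside the trust region
        s *= delta / np.linalg.norm(s)
    x_new = x + s
    phi = lambda z: f(z) + lam * np.abs(z).sum()  # full regularized objective
    ared = phi(x) - phi(x_new)                    # actual reduction
    pred = -(g @ s) + lam * (np.abs(x).sum() - np.abs(x_new).sum())
    if ared / max(pred, 1e-16) > 0.1:             # accept, enlarge radius
        return x_new, 2.0 * delta, crit
    return x, 0.5 * delta, crit                   # reject, shrink radius

# Toy usage: a smooth quadratic stands in for the PDE-constrained objective.
f = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
grad_f = lambda x: x - 1.0
x, delta = np.zeros(5), 1.0
for _ in range(50):
    x, delta, crit = tr_step(x, f, grad_f, lam=0.1, delta=delta)
print(x, crit)  # approaches the soft-thresholded solution (~0.9 per entry)
```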

Talk 2: The SiMPL method for density-based topology optimization
Speaker: Dohyun Kim
Abstract: We introduce sigmoidal mirror descent with a projected latent variable (SiMPL), a novel first-order optimization method for density-based topology optimization. SiMPL ensures pointwise bound-preserving design updates and converges faster than other popular first-order topology optimization methods. By leveraging the (negative) Fermi-Dirac entropy, we define a non-symmetric Bregman divergence that facilitates a simple yet effective update rule with the help of a so-called latent variable. SiMPL produces a sequence of pointwise-feasible iterates even when high-order finite elements are used in the discretization. Numerical experiments demonstrate that the method outperforms other popular first-order optimization algorithms. We also present mesh- and order-independent convergence along with possible extensions of the method.
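
A schematic of the update rule may help. Under the (negative) Fermi-Dirac entropy the mirror map is the logit and its inverse is the sigmoid, so a mirror-descent step lives in a latent variable ψ = logit(ρ) and maps back through the sigmoid; a scalar shift of ψ (the "projected latent variable") can enforce a volume constraint. The sketch below assumes exactly this structure plus a bisection for the shift, and is not the speakers' implementation.

```python
# Hedged sketch of a SiMPL-style update: mirror descent on a density rho in (0,1)
# under the negative Fermi-Dirac entropy (mirror map = logit, inverse = sigmoid),
# with a bisection on a scalar latent shift enforcing the volume constraint.
import numpy as np

sigmoid = lambda psi: 1.0 / (1.0 + np.exp(-psi))
logit = lambda rho: np.log(rho / (1.0 - rho))

def simpl_update(rho, grad, alpha, vol_frac):
    psi = logit(rho) - alpha * grad              # mirror step in latent space
    lo, hi = -50.0, 50.0                         # bracket for the volume shift
    for _ in range(60):                          # mean(sigmoid) is monotone in mid
        mid = 0.5 * (lo + hi)
        if sigmoid(psi + mid).mean() > vol_frac:
            hi = mid
        else:
            lo = mid
    return sigmoid(psi + 0.5 * (lo + hi))        # pointwise in (0,1) by design

# Toy usage: the "gradient" is a stand-in for a compliance sensitivity.
rho = np.full(100, 0.5)
for _ in range(20):
    grad = np.linspace(-1.0, 1.0, 100)
    rho = simpl_update(rho, grad, alpha=1.0, vol_frac=0.5)
print(rho.min(), rho.max(), rho.mean())          # bounds hold, volume ~ 0.5
```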

Talk 3: Two-level Discretization Scheme for Total Variation in Integer Optimal Control
Speaker: Paul Manns
Abstract: We advance the discretization of the dual formulation of the total variation term with Raviart-Thomas functions, which is known from the literature for convex problems. Due to our integrality constraints, the previous analysis is no longer applicable because, when considering a Γ-convergence-type argument, the recovery sequences generally need to attain non-integer, that is, infeasible, values. We overcome this problem by embedding a finer discretization of the input functions. A superlinear coupling of the mesh sizes implies an averaging on the coarser Raviart-Thomas mesh, which makes it possible to recover the total variation of integer-valued limit functions with integer-valued, discretized input functions. In turn, we obtain a Γ-convergence-type result and convergence rates under additional regularity assumptions.
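
For context, the dual formulation in question can be sketched as follows (a hedged transcription in assumed notation: RT_0 denotes the lowest-order Raviart-Thomas space on a mesh 𝒯_h; the talk's two-level scheme then discretizes the input functions on a finer mesh).

```latex
% Dual form of total variation and a Raviart-Thomas discretization (sketch).
\[
  \mathrm{TV}(u) = \sup\Big\{ \int_\Omega u \,\operatorname{div}\varphi \,dx \;:\;
    \varphi \in H(\operatorname{div};\Omega),\ \|\varphi\|_{\infty} \le 1 \Big\},
\]
\[
  \mathrm{TV}_h(u_h) = \sup\Big\{ \int_\Omega u_h \,\operatorname{div}\varphi_h \,dx \;:\;
    \varphi_h \in RT_0(\mathcal{T}_h),\ \|\varphi_h\|_{\infty} \le 1 \Big\}.
\]
```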

Speakers

Paul Manns

TU Dortmund University
Bio: Paul completed his PhD at the Institute for Mathematical Optimization at Technical University of Braunschweig in 2019. Afterwards, he joined the Mathematics and Computer Science Division of Argonne National Laboratory as a James H. Wilkinson Fellow in Scientific Computing. In September 2021, Paul moved to TU Dortmund University as an assistant professor. Paul's research focus lies on mixed-integer optimization in infinite dimensions, in particular appropriate regularization techniques and trust-region algorithms…

Harbir Antil

Robert Baraldi

Dohyun Kim

Thursday July 24, 2025 10:30am - 11:45am PDT
Joseph Medicine Crow Center for International and Public Affairs (DMC) 256 3518 Trousdale Pkwy, 256, Los Angeles, CA 90089

1:15pm PDT

Parallel Sessions 11N: Riemannian geometry, optimization, and applications I
Session: Riemannian geometry, optimization, and applications I
Chair: Wen Huang
Cluster: Optimization on Manifolds

Talk 1: A Riemannian Proximal Newton-CG Method
Speaker: Wen Huang
Abstract: The proximal gradient method and its variants have been generalized to Riemannian manifolds for solving optimization problems of the form $f + g$, where $f$ is continuously differentiable and $g$ may be nonsmooth. However, most of them do not have local superlinear convergence. Recently, a Riemannian proximal Newton method has been developed for problems of this form where the manifold $\mathcal{M}$ is a compact embedded submanifold and $g(x) = \lambda \|x\|_1$. Although this method converges superlinearly locally, global convergence is not guaranteed. The existing remedy relies on a hybrid approach: running a Riemannian proximal gradient method until the iterate is sufficiently accurate and then switching to the Riemannian proximal Newton method. This approach is sensitive to the switching parameter. In this talk, we propose a Riemannian proximal Newton-CG method that merges the truncated conjugate gradient method with the Riemannian proximal Newton method. Global convergence and local superlinear convergence are proven. Numerical experiments show that the proposed method outperforms other state-of-the-art methods.
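
The truncated-CG ingredient is classical, and a hedged sketch may clarify what "Newton-CG" buys: the Newton system is solved only approximately, stopping early on negative curvature or when a step bound is hit. The safeguard radius below is one common truncation rule and is an assumption of this sketch, not a detail from the talk; the Riemannian and proximal parts are likewise omitted.

```python
# Hedged sketch of truncated (Steihaug-Toint) CG for the model
# min_v g@v + 0.5*v@(H@v) subject to ||v|| <= delta (assumed safeguard).
import numpy as np

def to_boundary(v, p, delta):
    """Positive tau with ||v + tau*p|| = delta."""
    a, b, c = p @ p, 2.0 * (v @ p), v @ v - delta ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def truncated_cg(H, g, delta, tol=1e-8, maxit=200):
    v = np.zeros_like(g)
    r = g.copy()                       # model gradient at v = 0
    p = -r
    for _ in range(maxit):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0.0:                # negative curvature: run to the boundary
            return v + to_boundary(v, p, delta) * p
        alpha = (r @ r) / curv
        if np.linalg.norm(v + alpha * p) >= delta:
            return v + to_boundary(v, p, delta) * p
        v = v + alpha * p
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) < tol:
            return v                   # truncated: system solved inexactly
        p = -r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return v

# Toy usage: a well-conditioned system whose Newton step is interior.
H = np.diag([1.0, 2.0, 3.0]); g = np.array([1.0, -1.0, 0.5])
print(truncated_cg(H, g, delta=5.0))   # close to -inv(H) @ g
```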

Talk 2: Manifold Identification and Second-Order Algorithms for l1-Regularization on the Stiefel Manifold
Speaker: Shixiang Chen
Abstract: In this talk, we will discuss manifold identification for the l1-regularization problem on the Stiefel manifold. First, we will demonstrate that the intersection of the identified manifold with the Stiefel manifold forms a submanifold. Building on this, we will propose a novel second-order retraction-based algorithm specifically designed for the intersected submanifold. Numerical experiments confirm that the new algorithm exhibits superlinear convergence.
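
In symbols, the problem class and the identification question read as follows (a hedged transcription; the notation St(n,p) and the zero-pattern set Z are assumptions of this sketch, not quoted from the talk).

```latex
% l1-regularized minimization over the Stiefel manifold (sketch of the setting).
\[
  \min_{X \in \mathrm{St}(n,p)} \; f(X) + \lambda \|X\|_1,
  \qquad
  \mathrm{St}(n,p) = \{ X \in \mathbb{R}^{n \times p} : X^{\top} X = I_p \}.
\]
% Identification: the zero pattern Z of a local solution defines the flat set
% M_Z = { X : X_{ij} = 0 for (i,j) in Z }; the talk shows the intersection
% M_Z \cap St(n,p) is a submanifold, on which a second-order
% retraction-based method can then operate.
```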

Talk 3: A Riemannian Accelerated Proximal Gradient Method
Speaker: Shuailing Feng
Abstract: Riemannian accelerated gradient methods have been widely studied for smooth problems, but whether accelerated proximal gradient methods for nonsmooth composite problems on Riemannian manifolds can achieve theoretical acceleration remains unclear. Moreover, existing Riemannian accelerated gradient methods address the geodesically convex and geodesically strongly convex cases separately. In this work, we introduce a unified Riemannian accelerated proximal gradient method with a rigorous convergence rate analysis for optimization problems of the form $F(x) = f(x) + h(x)$ on manifolds, where $f$ is either geodesically convex or geodesically strongly convex, and $h$ is weakly retraction-convex. Our analysis shows that the proposed method achieves acceleration under appropriate conditions. Additionally, we introduce a safeguard mechanism to ensure convergence of the method in non-convex settings. Numerical experiments demonstrate the effectiveness and theoretical acceleration of our algorithms.
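
For orientation, here is a hedged Euclidean analogue of the accelerated proximal gradient iteration that the talk transports to manifolds (step size t, extrapolation parameter β_k); the Riemannian version replaces the additions below with retractions and vector transports, which is where the analysis becomes delicate.

```latex
% Euclidean accelerated proximal gradient for F = f + h (analogue only,
% not the talk's Riemannian scheme).
\begin{align*}
  y_k     &= x_k + \beta_k \, (x_k - x_{k-1}), \\
  x_{k+1} &= \operatorname{prox}_{t h}\big( y_k - t \nabla f(y_k) \big),
  \quad\text{where } \operatorname{prox}_{t h}(v)
        = \operatorname*{arg\,min}_{u} \; h(u) + \tfrac{1}{2t} \|u - v\|^2 .
\end{align*}
```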

Speakers

Wen Huang

Shixiang Chen

Assistant Professor, University of Science and Technology of China
I work on nonconvex optimization.

Shuailing Feng

Thursday July 24, 2025 1:15pm - 2:30pm PDT
Joseph Medicine Crow Center for International and Public Affairs (DMC) 256 3518 Trousdale Pkwy, 256, Los Angeles, CA 90089

4:15pm PDT

Parallel Sessions 12N: Polynomial Optimization & Tensor Methods
Session: Polynomial Optimization & Tensor Methods
Chair: Yang Liu
Cluster:

Talk 1: On the convergence of critical points on real varieties and applications to polynomial optimization
Speaker: Ali Mohammad Nezhad
Abstract: Let $F \in \mathrm{R}[X_1,\ldots,X_n]$ and the zero set $V=\mathrm{zero}(\{P_1,\ldots,P_s\},\mathrm{R}^n)$ be given with the canonical Whitney stratification, where $\{P_1,\ldots,P_s\} \subset \mathrm{R}[X_1,\ldots,X_n]$ and $\mathrm{R}$ is a real closed field. We explore isolated trajectories that result from critical points of $F$ on $V_{\xi}=\mathrm{zero}(\{P_1-\xi_1,\ldots,P_s-\xi_s\},\mathrm{R}^n)$ when $\xi \downarrow 0$, in the sense of stratified Morse theory. Our main motivation is the limiting behavior of log-barrier functions in polynomial optimization, which leads to a central path, an underlying notion behind the theory of interior point methods. We prove conditions for the existence, convergence, and smoothness of a central path. We also consider the cases where $F$ and the $P_i$ are definable functions in a (polynomially bounded) o-minimal expansion of $\mathbb{R}$. Joint work with Saugata Basu, Purdue University.
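
The link to interior-point methods can be made explicit with a hedged sketch: a log-barrier minimizer is a critical point of F on a perturbed variety, so letting the barrier parameter tend to zero matches the ξ ↓ 0 limit studied in the talk. The formulation below is illustrative, not quoted from the abstract.

```latex
% Log-barrier central path for F over {x : P_i(x) >= 0} (assumed notation):
\[
  x(\mu) \in \operatorname*{arg\,min}_{x} \; F(x) - \mu \sum_{i=1}^{s} \log P_i(x),
  \qquad \mu \downarrow 0 ;
\]
% the stationarity condition \nabla F(x) = \mu \sum_i \nabla P_i(x) / P_i(x)
% exhibits x(\mu) as a critical point of F on V_\xi with \xi_i = P_i(x(\mu)),
% so the central path is one of the trajectories arising as \xi tends to 0.
```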

Talk 2: Approximation of a Moment Sequence by Moment-S.o.S Hierarchy
Speaker: Hoang Anh Tran
Abstract: The moment-S.o.S hierarchy is a widely applicable framework for addressing polynomial optimization problems over basic semi-algebraic sets, based on positivity certificates for polynomials. Recent works show that the convergence rate of this hierarchy over certain simple sets, namely the unit ball, the hypercube, and the standard simplex, is of order $\mathrm{O}(1/r^2)$, where $r$ denotes the level of the moment-S.o.S hierarchy. This paper aims to provide a comprehensive understanding of the convergence rate of the moment-S.o.S hierarchy by estimating the Hausdorff distance between the set of pseudo truncated moment sequences and the set of truncated moment sequences specified by Tchakaloff's theorem. Our results provide a connection between the convergence rate of the moment-S.o.S hierarchy and the Łojasiewicz exponent of the domain under the compactness assumption. Consequently, we obtain a convergence rate of $\mathrm{O}(1/r)$ for polytopes and $\mathrm{O}(1/\sqrt{r})$ for domains that either satisfy the Polyak-Łojasiewicz condition or are defined by locally strongly convex polynomials, and we extend the convergence rate of $\mathrm{O}(1/r^2)$ to general polynomials over the sphere.
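
As background for the rate statements, a hedged recap of the level-r moment relaxation in standard notation (assumed here: $L_y$ is the Riesz functional of a pseudo-moment sequence $y$, $M_r(y)$ the truncated moment matrix, and $M_r(g_j\,y)$ the localizing matrices for $S = \{x : g_j(x) \ge 0,\ j=1,\dots,m\}$).

```latex
\[
  f^{(r)}_{\mathrm{mom}} \;=\; \min_{y} \; L_y(f)
  \quad\text{s.t.}\quad y_0 = 1,\;\; M_r(y) \succeq 0,\;\;
  M_{r-\lceil \deg g_j/2 \rceil}(g_j\, y) \succeq 0,\;\; j = 1,\dots,m,
\]
% so f^(r)_mom lower-bounds min_S f, and the talk quantifies how fast the gap
% closes in r via the Hausdorff distance between pseudo truncated moment
% sequences and true truncated moment sequences.
```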

Talk 3: Efficient Adaptive Regularized Tensor Methods
Speaker: Yang Liu
Abstract: High-order tensor methods employing local Taylor approximations have attracted considerable attention for convex and nonconvex optimization. The pth-order adaptive regularization (ARp) approach builds a local model comprising a pth-order Taylor expansion and a (p+1)th-order regularization term, delivering optimal worst-case global and local convergence rates. However, for p≥2, subproblem minimization can yield multiple local minima, and while a global minimizer is recommended for p=2, effectively identifying a suitable local minimum for p≥3 remains elusive. This work extends interpolation-based updating strategies, originally proposed for p=2, to cases where p≥3, allowing the regularization parameter to adapt in response to interpolation models. Additionally, it introduces a new prerejection mechanism to discard unfavorable subproblem minimizers before function evaluations, thus reducing computational costs for p≥3. Numerical experiments, particularly on Chebyshev-Rosenbrock problems with p=3, indicate that the proper use of different minimizers can significantly improve practical performance, offering a promising direction for designing more efficient high-order methods.
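
For completeness, the ARp local model referenced above in its textbook form (assumed rather than quoted from the talk): at an iterate x, one approximately minimizes over the step s

```latex
\[
  m_p(s) \;=\; f(x) \;+\; \sum_{j=1}^{p} \frac{1}{j!}\, \nabla^j f(x)[s]^j
  \;+\; \frac{\sigma}{p+1}\, \|s\|^{p+1},
\]
% where sigma > 0 is the regularization parameter that the interpolation-based
% strategy adapts between iterations; for p >= 2 this subproblem can have
% several local minimizers, which is what motivates the prerejection test.
```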

Speakers

Yang Liu

Hoang Anh Tran

PhD student, National University of Singapore
Name: Hoang Anh Tran. Title: PhD student in mathematics at the National University of Singapore. Research topics: Semidefinite Programming, Polynomial Optimization, Optimal Transport.

Ali Mohammad Nezhad

Thursday July 24, 2025 4:15pm - 5:30pm PDT
Joseph Medicine Crow Center for International and Public Affairs (DMC) 256 3518 Trousdale Pkwy, 256, Los Angeles, CA 90089
 