Wednesday July 23, 2025 4:15pm - 5:30pm PDT
Session: Modern Polynomial Optimization in Data Science II
Chair: Xindong Tang
Cluster: Conic and Semidefinite Optimization

Talk 1: Loss surface of deep neural networks with polynomial activation functions
Speaker: Tingting Tang
Abstract: TBD

Talk 2: Low-precision tensor decomposition and its applications in data science
Speaker: Zi Yang
Abstract: Tensors are high-order generalizations of matrices and are widely used to represent multi-dimensional data arrays in data science. Dealing with large-scale tensors is memory- and computation-intensive, which prohibits their use in many resource-limited scenarios. Low-precision computation stores and computes with fewer bits, reducing memory use and accelerating computation. In this talk, we explore the application of low-precision computation to large-scale tensor problems. Specifically, we present a mixed-precision block stochastic gradient descent method for CP tensor decomposition. Our approach uses lower-bit fixed-point representations, such as INT8, INT4, and INT2, to compute gradients more efficiently. Numerical experiments on both synthetic and real-world tensor datasets demonstrate the superior efficiency of our mixed-precision algorithm compared to full-precision CP decomposition. This work significantly reduces memory, computing, and energy costs, making it particularly useful for resource-constrained edge computing devices. We will also discuss how low-precision tensor computation can compress large AI models and accelerate both their training and inference.
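As a concrete illustration of the low-precision idea, here is a minimal Python/NumPy sketch, not the speakers' implementation: block gradient descent for a rank-R CP model in which each factor-block gradient is rounded onto a low-bit symmetric fixed-point grid before the update. The quantizer, step size, and use of full-data (rather than stochastic) block gradients are simplifying assumptions.

import numpy as np

def quantize(g, bits=8):
    # Symmetric fixed-point quantization: round g onto a low-bit grid, then
    # return the dequantized values (simulates low-precision gradient storage).
    qmax = 2 ** (bits - 1) - 1                  # 127 for INT8, 7 for INT4, 1 for INT2
    scale = max(float(np.max(np.abs(g))) / qmax, 1e-12)
    return np.round(g / scale) * scale

def cp_block_gd(T, rank, bits=8, lr=1e-3, iters=2000, seed=0):
    # Fit T ~ sum_r a_r (outer) b_r (outer) c_r by cycling over the three
    # factor blocks; each block gradient is quantized before the step.
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = 0.1 * rng.standard_normal((I, rank))
    B = 0.1 * rng.standard_normal((J, rank))
    C = 0.1 * rng.standard_normal((K, rank))
    for t in range(iters):
        R = np.einsum('ir,jr,kr->ijk', A, B, C) - T   # residual tensor
        if t % 3 == 0:
            A -= lr * quantize(2 * np.einsum('ijk,jr,kr->ir', R, B, C), bits)
        elif t % 3 == 1:
            B -= lr * quantize(2 * np.einsum('ijk,ir,kr->jr', R, A, C), bits)
        else:
            C -= lr * quantize(2 * np.einsum('ijk,ir,jr->kr', R, A, B), bits)
    return A, B, C

# Toy check: recover a random rank-3 tensor with INT8-quantized gradients.
rng = np.random.default_rng(1)
A0, B0, C0 = [rng.standard_normal((8, 3)) for _ in range(3)]
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_block_gd(T, rank=3, bits=8)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
print(f'relative error: {err:.3f}')

On this toy problem, lowering bits from 8 to 4 or 2 exposes the accuracy-versus-precision trade-off that the mixed-precision method is designed to manage.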

Talk 3: Global Convergence of High-Order Regularization Methods with Sums-of-Squares Taylor model
Speaker: Wenqi Zhu
Abstract: High-order tensor methods that employ Taylor-based local models (of degree $p\ge 3$) within adaptive regularization frameworks have recently been proposed for both convex and nonconvex optimization problems. They have been shown to achieve superior, and even optimal, worst-case global convergence rates and local rates compared to Newton's method. Finding rigorous and efficient techniques for minimizing the Taylor polynomial sub-problems remains a challenge for these algorithms. Ahmadi et al. \cite{ahmadi2023higher} recently introduced a tensor method based on sum-of-squares (SoS) reformulations, so that each Taylor polynomial sub-problem in their approach can be tractably minimized using semidefinite programming (SDP); however, the global convergence and complexity of their method have not been addressed for general nonconvex problems. In this talk, we introduce an algorithmic framework that combines the SoS Taylor model with adaptive regularization techniques for nonconvex smooth optimization problems. Each iteration minimizes an SoS-convex Taylor model at polynomial cost per iteration. For general nonconvex functions, the worst-case evaluation complexity bound is $\mathcal{O}(\epsilon^{-2})$, while for strongly convex functions, an improved evaluation complexity bound of $\mathcal{O}(\epsilon^{-\frac{1}{p}})$ is established. To the best of our knowledge, this is the first global rate analysis for an adaptive regularization algorithm with a tractable high-order sub-problem in nonconvex smooth optimization, opening the way for further improvements.
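For background, the generic degree-$p$ adaptively regularized model is sketched below in the abstract's notation; this is the standard sub-problem setting, not the speakers' exact SoS construction. At iterate $x_k$, one builds
\[
  T_p(x_k, s) \;=\; f(x_k) + \sum_{j=1}^{p} \frac{1}{j!}\,\nabla^j f(x_k)[s]^j,
  \qquad
  m_k(s) \;=\; T_p(x_k, s) + \frac{\sigma_k}{p+1}\,\|s\|^{p+1},
\]
where $\sigma_k > 0$ is a regularization weight updated adaptively, and the step is $s_k \approx \arg\min_s m_k(s)$. For general $f$ this sub-problem is itself a nonconvex polynomial optimization problem; replacing it with an SoS-convex Taylor model is what allows each iteration to be solved as an SDP of polynomial size.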

Speakers
Xindong Tang
Tingting Tang
Zi Yang
Wenqi Zhu
Location: Taper Hall (THH) 118, 3501 Trousdale Pkwy, Los Angeles, CA 90089
