Tuesday July 22, 2025 10:30am - 11:45am PDT
Session: Optimization for Neural Network Pruning and Quantization
Chair: Lin Xiao
Cluster: Optimization Applications (Communication, Energy, Health, ML, ...)

Talk 1: Understanding Neural Network Quantization and its Robustness Against Data Poisoning Attacks
Speaker: Yiwei Lu
Abstract: Neural network quantization, exemplified by BinaryConnect (BC) and its variants, has become a standard approach for model compression. These methods typically employ the sign function in the forward pass, with various approximations used for gradient computation during backpropagation. While effective, these techniques often rely on heuristics or "training tricks" that lack theoretical grounding. This talk explores the optimization perspective of these quantization methods, introducing forward-backward quantizers as a principled framework. We present ProxConnect++ (PC++), a generalization that encompasses existing quantization techniques and provides automatic theoretical guarantees. Furthermore, we reveal an unexpected benefit of neural network quantization: enhanced robustness against data poisoning attacks.
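For context, the sketch below illustrates the BinaryConnect-style forward-backward split the abstract refers to: the sign function quantizes weights in the forward pass, and the resulting gradient is applied to latent full-precision weights, a straight-through-style approximation of the sign function's derivative. The function names, the grad_fn callback, and the toy quadratic loss are illustrative assumptions, not the speakers' implementation.

```python
import numpy as np

def binaryconnect_step(w, grad_fn, lr=0.01):
    """One BinaryConnect-style update (illustrative sketch).

    The forward pass uses sign-quantized weights; the gradient is
    evaluated at those quantized weights and then applied to the
    latent full-precision weights (straight-through-style update).
    """
    w_q = np.sign(w)      # forward quantizer: binarize to {-1, 0, +1}
    g = grad_fn(w_q)      # stochastic gradient at the quantized point
    return w - lr * g     # update the full-precision (latent) weights

# Hypothetical usage with a toy quadratic loss 0.5 * ||w - target||^2
target = np.array([0.7, -1.3, 0.2])
w = np.zeros(3)
for _ in range(100):
    w = binaryconnect_step(w, grad_fn=lambda wq: wq - target, lr=0.1)
```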

Talk 2: Convex Regularizations for Pruning- and Quantization-Aware Training of Neural Networks
Speaker: Lin Xiao
Abstract: We present a convex regularization approach for pruning- and quantization-aware training of deep neural networks. While it is well known that group Lasso can induce structured pruning, we show that convex, piecewise-affine regularizations (PAR) can effectively induce quantization. Previous work has had limited success in practice due to the difficulty of integrating structured regularization with stochastic gradient methods. We derive an aggregate proximal stochastic gradient method (AProx) that successfully produces the desired pruning and quantization results. Moreover, we establish last-iterate convergence of the method, which supports computational practice better than the classical theory of average-iterate convergence.
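As background for the talk, here is a minimal sketch of a standard proximal stochastic gradient step with the group-Lasso proximal map, the mechanism that shrinks group norms and zeroes out whole groups of weights (structured pruning). It is not the AProx method presented in the talk; the function names and arguments are illustrative assumptions.

```python
import numpy as np

def prox_group_lasso(groups, lam, step):
    """Proximal map of the group-Lasso penalty lam * sum_g ||w_g||_2:
    each group's norm is shrunk, and groups whose norm falls below the
    threshold are zeroed out entirely (structured pruning)."""
    pruned = []
    for g in groups:
        norm = np.linalg.norm(g)
        scale = max(0.0, 1.0 - step * lam / norm) if norm > 0 else 0.0
        pruned.append(scale * g)
    return pruned

def prox_sgd_step(groups, grad_groups, lam, lr=0.01):
    """One proximal stochastic gradient step: a gradient step on the
    smooth loss followed by the regularizer's proximal map."""
    stepped = [w - lr * g for w, g in zip(groups, grad_groups)]
    return prox_group_lasso(stepped, lam, lr)
```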

Talk 3: Quantization through Piecewise-Affine Regularization: Optimization and Statistical Guarantees
Speaker: Jianhao Ma
Abstract: Optimization problems involving discrete or quantized variables can be very challenging due to the combinatorial nature of the design space. We show that (coordinate-wise) piecewise-affine regularization (PAR) can effectively induce quantization in the optimization variables. PAR provides a flexible modeling and computational framework for quantization based on continuous and convex optimization. In addition, for linear regression problems, we can approximate $\ell_1$- and squared $\ell_2$-regularizations using different parameterizations of PAR, and obtain statistical guarantees that are similar to those of Lasso and ridge regression, all with quantized regression variables.
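To make the PAR idea concrete, the sketch below uses one possible coordinate-wise piecewise-affine penalty (a convex sum of absolute deviations from a set of quantization levels) and its one-dimensional proximal map. Between consecutive kinks the proximal objective is a simple quadratic, so the minimizer is either a kink or an interior stationary point, which is how the prox pulls coordinates onto the grid. The specific penalty form and all names are assumptions for illustration, not the formulation used in the talk.

```python
import numpy as np

def par_penalty(x, levels, alpha=1.0):
    """One convex, coordinate-wise piecewise-affine penalty (illustrative):
    alpha * sum_i sum_k |x_i - q_k|, with kinks at the quantization levels q_k."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    q = np.asarray(levels, dtype=float)
    return alpha * np.abs(x[:, None] - q[None, :]).sum()

def prox_par_1d(v, levels, alpha, step):
    """Proximal map of the 1-D penalty above:
        argmin_x 0.5 * (x - v)**2 + step * alpha * sum_k |x - q_k|.
    On each segment between kinks the objective is quadratic, so the
    minimizer is either a kink or an interior stationary point."""
    q = np.sort(np.asarray(levels, dtype=float))
    m = len(q)
    candidates = list(q)                           # every kink is a candidate
    edges = np.concatenate(([-np.inf], q, [np.inf]))
    for lo, hi in zip(edges[:-1], edges[1:]):
        below = int(np.sum(q <= lo))               # levels to the left of this segment
        slope = alpha * (below - (m - below))      # constant penalty slope on (lo, hi)
        x = v - step * slope                       # stationary point of the quadratic piece
        if lo < x < hi:
            candidates.append(x)
    objective = lambda x: 0.5 * (x - v) ** 2 + step * alpha * np.abs(x - q).sum()
    return min(candidates, key=objective)
```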

Speakers

Yiwei Lu

Lin Xiao

Lin Xiao is a Research Scientist at Facebook AI Research (FAIR) in Seattle, Washington. He received his BE from Beijing University of Aeronautics and Astronautics (Beihang University) and his PhD from Stanford University, and was a postdoctoral fellow in the Center for the Mathematics of Information at the California Institute of Technology.

Jianhao Ma

Location: Taper Hall (THH) 201, 3501 Trousdale Pkwy, Los Angeles, CA 90089
