Monday July 21, 2025 4:15pm - 5:30pm PDT
Session: Deterministic and Stochastic Methods for Optimization and Games - Part II
Chair: Angelia Nedich
Cluster: Multi-agent Optimization and Games

Talk 1: Frequentist Guarantees of Distributed (Non)-Bayesian Inference
Speaker: Bohan Wu
Abstract: Motivated by the need to analyze large, decentralized datasets, distributed Bayesian inference has become a critical research area across multiple fields, including statistics, electrical engineering, and economics. This paper establishes frequentist properties, such as posterior consistency, asymptotic normality, and posterior contraction rates, for the distributed (non-)Bayesian inference problem among agents connected via a communication network. Our results show that, under appropriate assumptions on the communication graph, distributed Bayesian inference retains parametric efficiency while enhancing robustness in uncertainty quantification. We also explore the trade-off between statistical efficiency and communication efficiency by examining how the design and size of the communication graph affect the posterior contraction rate. Furthermore, we extend our analysis to time-varying graphs and apply our results to exponential family models, distributed logistic regression, and decentralized detection models.

Talk 2: Decentralized high-dimensional inference over mesh networks: a unified perspective
Speaker: Marie Maros
Abstract: We consider the problem of solving high-dimensional statistical inference problems over a network of agents (with no coordinating agent), each of which has exclusive access to a fraction of the total available samples. In the high-dimensional setting, the problem dimension is much larger than the total number of available samples, making the problem ill-conditioned. Despite this, we empirically observe that obtaining a statistically meaningful solution is possible with many existing decentralized schemes, provided that the underlying parameter to estimate lies in a low-dimensional subspace. Our observations challenge existing theories in two key ways: (i) most decentralized schemes do not break down as the problem dimensionality increases, and (ii) decentralized schemes that are expected to behave like one another behave very differently in high dimensions. To understand the behavior of decentralized optimization methods in high-dimensional inference, we introduce a unified framework and analysis that let us develop an understanding of the mechanisms enabling dimension-independent performance of decentralized schemes.
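
As an illustrative aside (not code from the talk), the simplest scheme in the family discussed here is decentralized gradient descent (DGD): each agent averages its iterate with its neighbors' and then takes a step along its local gradient, with no coordinating agent. The mixing matrix `W`, the quadratic local losses, and the stepsize below are hypothetical choices for a minimal sketch.

```python
import numpy as np

# Three agents on a complete graph; W is doubly stochastic, so the
# neighbor-averaging step preserves the network-wide mean of the iterates.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

# Hypothetical local losses f_i(x) = 0.5 * (x - b_i)^2; the minimizer of
# the network objective sum_i f_i is the average of b, here 2.0.
b = np.array([1.0, 2.0, 3.0])
alpha = 0.01  # small constant stepsize: converges to an O(alpha) neighborhood

x = np.zeros(3)  # agent i holds the scalar estimate x[i]
for _ in range(2000):
    x = W @ x - alpha * (x - b)  # neighbor averaging + local gradient step

# every agent's estimate ends up near the global minimizer 2.0
```

With a constant stepsize, DGD only reaches an O(alpha) neighborhood of the solution, whereas bias-corrected relatives (e.g., gradient-tracking methods) converge exactly; differences of this kind are one source of the divergent high-dimensional behavior among superficially similar schemes.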

Talk 3: Toward Parameter-free Decentralized Optimization
Speaker: Gesualdo Scutari
Abstract: We study the minimization of (locally strongly) convex, locally smooth functions over a network of agents without a centralized server. Existing decentralized algorithms require knowledge of problem and network parameters, such as the Lipschitz constant of the global gradient and/or network connectivity, for hyperparameter tuning. Agents usually cannot access this information, leading to conservative selections, slow convergence, or divergence. We introduce a decentralized algorithm that eliminates the need for such problem-specific tuning. Our approach employs an operator-splitting technique with a novel variable metric, enabling a local backtracking line search that adaptively selects the stepsize without global information or extensive communications. This yields convergence guarantees with more favorable dependence on optimization and network parameters than existing nonadaptive methods. Theoretical findings are supported by numerical experiments.

Speakers

Angelia Nedich


Marie Maros

Assistant Professor, Texas A&M University

Gesualdo Scutari

Taper Hall (THH) 201 3501 Trousdale Pkwy, 201, Los Angeles, CA 90089
