Frequentist Guarantees of Distributed (Non)-Bayesian Inference

📅 2023-11-14
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work establishes the theoretical foundations of distributed (non-)Bayesian inference within the frequentist framework, addressing statistical reliability in multi-agent collaborative inference over decentralized networks. Method: Integrating random graph theory, asymptotic statistics, and distributed optimization, the study rigorously analyzes posterior consistency, asymptotic normality, and posterior contraction rates under general network topologies. Contribution/Results: It provides the first formal characterization of how communication graph connectivity governs the statistical–communication efficiency trade-off. Theoretically, it proves that—under mild connectivity conditions—distributed inference preserves parametric efficiency while enhancing robustness in uncertainty quantification; it further derives explicit, network-size-dependent bounds on posterior contraction rates. The framework accommodates time-varying topologies and canonical models including exponential families, logistic regression, and decentralized detection, delivering verifiable frequentist guarantees for large-scale distributed learning.
📝 Abstract
Motivated by the need to analyze large, decentralized datasets, distributed Bayesian inference has become a critical research area across multiple fields, including statistics, electrical engineering, and economics. This paper establishes frequentist properties, such as posterior consistency, asymptotic normality, and posterior contraction rates, for the distributed (non-)Bayesian inference problem among agents connected via a communication network. Our results show that, under appropriate assumptions on the communication graph, distributed Bayesian inference retains parametric efficiency while enhancing robustness in uncertainty quantification. We also explore the trade-off between statistical efficiency and communication efficiency by examining how the design and size of the communication graph impact the posterior contraction rate. Furthermore, we extend our analysis to time-varying graphs and apply our results to exponential family models, distributed logistic regression, and decentralized detection models.
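To make the setting concrete, a common template for distributed (non-)Bayesian inference is the log-linear consensus rule from the social-learning literature: each agent averages its neighbors' log-beliefs according to a mixing matrix and then folds in the likelihood of its own private observation. The sketch below is illustrative only, not the paper's exact algorithm; the ring network, the mixing matrix `A`, the Gaussian likelihood, and the finite hypothesis set `thetas` are all assumptions made for the demo.

```python
import numpy as np

# Illustrative non-Bayesian social-learning sketch (assumed setup, not the
# paper's algorithm): 4 agents on a ring, finite hypothesis set, Gaussian data.

rng = np.random.default_rng(0)

n_agents = 4
thetas = np.array([-1.0, 0.0, 1.0])  # finite hypothesis set (assumed)
true_theta = 1.0                     # ground truth = hypothesis index 2

# Doubly stochastic mixing matrix for a 4-agent ring (assumed topology)
A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

# Uniform initial beliefs, stored in log space for numerical stability
log_beliefs = np.log(np.full((n_agents, len(thetas)), 1.0 / len(thetas)))

def log_lik(x, theta):
    # Gaussian log-likelihood with unit variance, up to additive constants
    return -0.5 * (x - theta) ** 2

for t in range(200):
    x = rng.normal(true_theta, 1.0, size=n_agents)  # private observations
    # Consensus step on log-beliefs, then a local Bayesian-style update
    log_beliefs = A @ log_beliefs + log_lik(x[:, None], thetas[None, :])
    # Renormalize each agent's belief to a probability vector
    log_beliefs -= np.log(np.exp(log_beliefs).sum(axis=1, keepdims=True))

beliefs = np.exp(log_beliefs)
print(beliefs.argmax(axis=1))  # each agent's belief should concentrate on index 2
```

The doubly stochastic `A` is what lets the network aggregate evidence: averaging log-beliefs preserves the total log-likelihood drift toward the true hypothesis, which is the mechanism behind the consistency and contraction-rate results the abstract describes.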
Problem

Research questions and friction points this paper is trying to address.

Can frequentist guarantees (consistency, asymptotic normality, contraction rates) be established for distributed Bayesian inference?
What trade-off exists between statistical efficiency and communication efficiency?
Do the guarantees extend to time-varying graphs and concrete model classes?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Establishes Frequentist properties for distributed inference
Retains parametric efficiency with robust uncertainty quantification
Explores trade-off between statistical and communication efficiency
Bohan Wu
Department of Statistics, Columbia University, New York, NY 10027, USA
César A. Uribe
Rice University
Distributed Optimization · Machine Learning · Network Science · Optimal Transport