🤖 AI Summary
This paper develops Bayesian predictive synthesis for random network graphs at the graphon level. Methodologically, it introduces the first graphon-space Bayesian predictive synthesis framework, which aggregates predictive distributions from multiple proxy graphons via weighted fusion, equivalently realizing the L²-projection of the true graphon onto the subspace spanned by the proxies. The paper establishes a non-asymptotic oracle inequality, proving a "combination outperforms individual" statistical advantage. By integrating graphon theory, KL-moment calibration, and Lipschitz stability analysis, the framework controls estimation error for key graph statistics, including edge density, degree distributions, and subgraph densities. It preserves model closure for heavy-tailed degree distributions and within the exponential random graph model (ERGM) family, and achieves the minimax-optimal L² convergence rate.
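The weighted-fusion-as-L²-projection idea described above can be illustrated numerically. The sketch below is purely hypothetical (the graphons and agents are invented for illustration, not taken from the paper): it discretizes a "true" graphon and a few agent graphons on a grid, computes the least-squares weights that project the truth onto the agent span, and checks that the projection error is no larger than that of any single agent.

```python
import numpy as np

# Hypothetical sketch: L2-projection of a "true" graphon onto the linear span
# of agent graphons W_1, ..., W_K, discretized on a uniform grid of [0,1]^2.
# All graphon choices here are illustrative, not from the paper.

n = 200
u = (np.arange(n) + 0.5) / n
U, V = np.meshgrid(u, u, indexing="ij")

# "True" graphon (assumed for illustration): a smooth symmetric function.
W_true = 0.5 * np.exp(-(U + V))

# Agent graphons: simple proxies spanning a low-dimensional subspace.
agents = [
    0.3 * np.ones_like(U),   # constant (Erdos-Renyi-type) agent
    U * V,                   # rank-one multiplicative agent
    0.5 * (U + V),           # additive agent
]

# Least-squares weights: minimize ||W_true - sum_k a_k W_k||_{L2} on the grid.
A = np.stack([W.ravel() for W in agents], axis=1)  # (n^2, K) design matrix
a, *_ = np.linalg.lstsq(A, W_true.ravel(), rcond=None)

W_proj = sum(ak * Wk for ak, Wk in zip(a, agents))
l2_err = np.sqrt(np.mean((W_true - W_proj) ** 2))

# Projection onto the span beats every individual agent (each agent lies
# in the span, so its error upper-bounds the projection error).
best_single = min(np.sqrt(np.mean((W_true - Wk) ** 2)) for Wk in agents)
assert l2_err <= best_single + 1e-12
```

The final assertion is a grid-level instance of the "combination outperforms individual" phenomenon: because each agent is itself an element of the span, the L²-projection can never do worse than the best single agent.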
📝 Abstract
Bayesian predictive synthesis provides a coherent Bayesian framework for combining multiple predictive distributions, or agents, into a single updated prediction, extending Bayesian model averaging to allow general pooling of full predictive densities. This paper develops a static, graphon-level version of Bayesian predictive synthesis for random networks. At the graphon level we show that Bayesian predictive synthesis corresponds to the integrated-squared-error (L²) projection of the true graphon onto the linear span of the agent graphons. We derive non-asymptotic oracle inequalities and prove that least-squares graphon-BPS, based on a finite number of edge observations, achieves the minimax L² rate over this agent span. Moreover, we show that any estimator that selects a single agent graphon is uniformly inconsistent on a nontrivial subset of the convex hull of the agents, whereas graphon-level Bayesian predictive synthesis remains minimax-rate optimal, formalizing a "combination beats components" phenomenon. Structural properties of the underlying random graphs are controlled through explicit Lipschitz bounds that transfer graphon error into error bounds for edge density, degree distributions, subgraph densities, clustering coefficients, and giant-component phase transitions. Finally, we develop a heavy-tail theory for Bayesian predictive synthesis, showing how mixtures and entropic tilts preserve regularly varying degree distributions, and how exponential random graph model agents remain within their family under log-linear tilting with Kullback–Leibler-optimal moment calibration.
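The Lipschitz-transfer claim in the abstract can be checked numerically for the simplest statistic, edge density. The sketch below (with invented graphons, purely for illustration) verifies the elementary chain |e(W) − e(W′)| ≤ ‖W − W′‖₁ ≤ ‖W − W′‖₂ on a grid, which is the mechanism by which a small graphon L² error forces a small edge-density error.

```python
import numpy as np

# Illustrative check (graphons invented, not from the paper) of the
# Lipschitz transfer for edge density: e(W) = ∫∫ W(u,v) du dv satisfies
#   |e(W) - e(W')| <= ||W - W'||_{L1} <= ||W - W'||_{L2},
# the second step by Jensen's (Cauchy-Schwarz) inequality on [0,1]^2.

n = 300
u = (np.arange(n) + 0.5) / n
U, V = np.meshgrid(u, u, indexing="ij")

W1 = U * V                                                  # "true" graphon
W2 = U * V + 0.05 * np.sin(np.pi * U) * np.sin(np.pi * V)   # perturbation

edge_gap = abs(np.mean(W1) - np.mean(W2))   # |e(W1) - e(W2)|
l1 = np.mean(np.abs(W1 - W2))               # ||W1 - W2||_{L1}
l2 = np.sqrt(np.mean((W1 - W2) ** 2))       # ||W1 - W2||_{L2}

assert edge_gap <= l1 + 1e-12
assert l1 <= l2 + 1e-12
```

Edge density is 1-Lipschitz in the L¹ (hence L²) graphon norm; the paper's bounds for degree distributions, subgraph densities, and clustering follow the same transfer pattern with statistic-specific Lipschitz constants.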