🤖 AI Summary
Parameter inference in generative network growth models is hindered by intractable likelihoods. Method: we establish theoretically and validate empirically the *intrinsic locality* of parameter information, showing that accurate global parameter estimation requires only small subgraphs. Leveraging this insight, we propose a neural density estimator (NDE) built on a graph neural network (GNN) with a restricted receptive field, enabling efficient likelihood-free Bayesian posterior approximation. The approach combines subgraph-level statistical feature extraction with a lightweight GNN architecture. Results: across nine canonical growth models, localized subgraph predictions agree closely with full-graph NDE inference; even for non-local models, the method delivers high-fidelity posterior estimates at significantly lower computational cost than conventional likelihood-based or simulation-based alternatives. This work provides both theoretical grounding and a practical paradigm for subgraph-based analysis and lightweight GNNs in likelihood-free inference.
📝 Abstract
Mechanistic network models can capture salient characteristics of empirical networks using a small set of domain-specific, interpretable mechanisms. Yet inference remains challenging because the likelihood is often intractable. We show that, for a broad class of growing network models, information about model parameters is localized in the network, i.e., the likelihood can be expressed in terms of small subgraphs. We take a Bayesian perspective on inference and develop neural density estimators (NDEs) to approximate the posterior distribution of model parameters using graph neural networks (GNNs) with a limited receptive field, i.e., the GNN can only "see" small subgraphs. We characterize nine growing network models in terms of their localization and demonstrate that localization predictions agree with NDEs on simulated data. Even for non-localized models, NDEs can infer high-fidelity posteriors matching model-specific inference methods at a fraction of the cost. Our findings establish information localization as a fundamental property of network growth, theoretically justifying the analysis of local subgraphs embedded in larger, unobserved networks and the use of GNNs with limited receptive fields for likelihood-free inference.
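The abstract's core idea, that a growth parameter can be recovered from small, local subgraph statistics without ever evaluating a likelihood, can be sketched with a toy example. The code below is purely illustrative and is not the paper's NDE/GNN method: it grows a network under a hypothetical nonlinear preferential-attachment kernel (attachment probability proportional to degree^alpha), summarizes it with a 1-hop local statistic (the fraction of degree-1 nodes), and approximates the posterior over alpha by rejection ABC. All function names, the kernel, the prior, and the tolerance are assumptions chosen for illustration.

```python
# Toy sketch (NOT the paper's method): likelihood-free inference of a growth
# parameter from a purely local statistic, via rejection ABC.
import random


def grow_network(alpha, n_nodes, seed=None):
    """Grow a tree-like network: each new node attaches to an existing node
    with probability proportional to degree**alpha (assumed kernel)."""
    rng = random.Random(seed)
    degrees = [1, 1]  # start from a single edge
    for _ in range(n_nodes - 2):
        weights = [d ** alpha for d in degrees]
        r = rng.random() * sum(weights)
        target = len(degrees) - 1  # fallback guards float rounding
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                target = i
                break
        degrees[target] += 1
        degrees.append(1)
    return degrees


def local_statistic(degrees):
    """A 'local' summary computable from 1-hop neighborhoods alone:
    the fraction of degree-1 nodes."""
    return sum(1 for d in degrees if d == 1) / len(degrees)


def abc_posterior(observed_stat, n_sims=200, tol=0.03, n=300, rng_seed=0):
    """Rejection ABC: keep prior draws of alpha whose simulated local
    statistic falls within `tol` of the observed one."""
    rng = random.Random(rng_seed)
    accepted = []
    for s in range(n_sims):
        alpha = rng.uniform(0.0, 2.0)  # uniform prior over alpha (assumed)
        stat = local_statistic(grow_network(alpha, n, seed=s))
        if abs(stat - observed_stat) < tol:
            accepted.append(alpha)
    return accepted


# "Observed" network generated with alpha = 1 (linear preferential attachment).
obs = local_statistic(grow_network(1.0, 300, seed=42))
post = abc_posterior(obs)
print(f"accepted {len(post)} draws; posterior mean alpha ~ "
      f"{sum(post) / max(len(post), 1):.2f}")
```

Here rejection ABC merely stands in for the paper's neural density estimator: both are likelihood-free, but an NDE amortizes inference with a learned network rather than repeating per-observation simulation, and the paper's GNN consumes small subgraphs directly rather than a hand-picked summary statistic.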