Decision-Theoretic Robustness for Network Models

📅 2026-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world networks exhibit sparsity, heterogeneity, and higher-order dependence, which makes model-based decisions highly sensitive to misspecification. This work proposes a locally robust decision framework that minimizes worst-case expected loss over a Kullback–Leibler neighborhood of the posterior, exploiting low-dimensional functionals of exchangeable graph models for tractable optimization. It introduces decision-theoretic robustness into exchangeable graph modeling for the first time, reveals universal critical exponents governing the explosion of decision uncertainty near percolation and fragmentation thresholds, and establishes a nonparametric minimax theory for sparse graph model selection. By combining graph limits, entropy-tilting algorithms, and variational inference, the study quantifies how robust posterior risk inflates under squared loss and demonstrates the method on real-world datasets, including brain functional connectivity and social networks from Indian villages.
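The "posterior variance controls the leading inflation" claim in the summary matches the standard small-radius expansion for KL-constrained worst-case expectations. The display below is that generic expansion, not a formula quoted from the paper, so the exact constants should be read as an assumption:

```latex
\sup_{Q:\,\mathrm{KL}(Q\,\|\,\Pi)\le\delta}
  \mathbb{E}_Q\!\left[\ell(a,\theta)\right]
  \;=\;
  \mathbb{E}_\Pi\!\left[\ell(a,\theta)\right]
  \;+\;
  \sqrt{2\delta\,\mathrm{Var}_\Pi\!\left(\ell(a,\theta)\right)}
  \;+\; O(\delta),
  \qquad \delta \to 0,
```

where \(\Pi\) is the posterior, \(\ell(a,\theta)\) the loss of action \(a\), and \(\delta\) the KL radius. The first-order inflation is zero exactly when the loss has no posterior variance, which is why robustness indices blow up where small perturbations of the graphon move a percolation functional sharply.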

📝 Abstract
Bayesian network models (Erdős–Rényi, stochastic block models, random dot product graphs, graphons) are widely used in neuroscience, epidemiology, and the social sciences, yet real networks are sparse, heterogeneous, and exhibit higher-order dependence. How stable are network-based decisions, model selection, and policy recommendations under small model misspecification? We study local decision-theoretic robustness by allowing the posterior to vary within a small Kullback–Leibler neighborhood and choosing actions that minimize worst-case posterior expected loss. Exploiting low-dimensional functionals available under exchangeability, we (i) adapt decision-theoretic robustness to exchangeable graphs via graphon limits and derive sharp small-radius expansions of robust posterior risk: under squared loss the leading inflation is controlled by the posterior variance of the loss, and for robustness indices that diverge at percolation/fragmentation thresholds we obtain a universal critical exponent describing the explosion of decision uncertainty near criticality; (ii) develop a nonparametric minimax theory for robust model selection between sparse Erdős–Rényi and block models, showing, via robustness error exponents, that no Bayesian or frequentist method can uniformly improve upon the decision-theoretic limits over configuration models and sparse graphon classes for percolation-type functionals; and (iii) propose a practical algorithm based on entropic tilting of posterior or variational samples, demonstrated on functional brain connectivity and Karnataka village social networks.
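The entropic-tilting idea in (iii) can be sketched directly on posterior draws: reweight the samples by exponentially tilting toward high loss, with the tilting parameter chosen so the reweighted distribution sits on the boundary of the KL ball. This is a minimal illustration of the generic technique, not the paper's implementation; the Gaussian "posterior losses" in the demo are hypothetical numbers:

```python
import numpy as np

def entropic_tilting(losses, delta, iters=200):
    """Worst-case expected loss over a KL ball of radius `delta`
    around the empirical distribution of posterior loss draws.

    losses : loss evaluations ell(a, theta_i) at posterior samples
    delta  : KL radius in nats
    Returns (robust_risk, weights).
    """
    losses = np.asarray(losses, dtype=float)
    n = len(losses)

    def kl_at(lam):
        # Weights proportional to exp(lam * loss); subtract the max
        # before exponentiating for numerical stability.
        logw = lam * losses
        logw -= logw.max()
        w = np.exp(logw)
        w /= w.sum()
        # KL(w || uniform) = sum_i w_i log(n w_i)
        return w, float(np.sum(w * np.log(n * w + 1e-300)))

    # KL(w || uniform) grows monotonically in lam >= 0, so bracket
    # the root and bisect for the lam that attains KL = delta.
    lo, hi = 0.0, 1.0
    while kl_at(hi)[1] < delta and hi < 1e8:
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        _, kl = kl_at(mid)
        if kl < delta:
            lo = mid
        else:
            hi = mid
    w, _ = kl_at(0.5 * (lo + hi))
    return float(np.sum(w * losses)), w

# Demo on simulated posterior losses (hypothetical, for illustration).
rng = np.random.default_rng(0)
ell = rng.normal(1.0, 0.5, size=20000)
delta = 0.01
robust, w = entropic_tilting(ell, delta)
plain = ell.mean()
# Small-radius expansion: inflation ~ sqrt(2 * delta * Var(loss)).
approx = plain + np.sqrt(2 * delta * ell.var())
```

For Gaussian losses the tilted mean matches the square-root expansion exactly in the population limit, so `robust` and `approx` agree closely here; for heavy-tailed loss functionals the exact tilted value and the first-order expansion can separate, which is the regime where the robustness indices above become informative.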
Problem

Research questions and friction points this paper is trying to address.

model misspecification
decision robustness
network models
Bayesian inference
graphon
Innovation

Methods, ideas, or system contributions that make the work stand out.

decision-theoretic robustness
graphon limits
entropic tilting
minimax theory
critical exponent
Marios Papamichalis
Postdoctoral Associate, Yale University
Statistics, Networks, Causal Inference, Deep Learning
Regina Ruane
Department of Statistics and Data Science, The Wharton School at the University of Pennsylvania, 3733 Spruce Street, Philadelphia, PA 19104-6340
S. Lunagomez
Department of Statistics, ITAM, Mexico
Swati Chandna
Birkbeck University of London