🤖 AI Summary
This paper investigates theoretical guarantees for causal edge-orientation recovery in large-scale random graphs, focusing on concentration of the false negative rate (FNR) under single-variable randomized interventions and ε-interventional faithfulness, an assumption that accommodates latent confounding. The results constitute the first dimension-adaptive, faithfulness-robust guarantees for causal structure learning in high-dimensional sparse graphs. The analysis reveals that scale-free network topologies induce intrinsic regularization, challenging the conventional view that high-dimensional heterogeneity inherently impedes causal discovery. Using sparse Erdős–Rényi and generalized Barabási–Albert directed acyclic graph models, together with concentration inequalities, the paper proves that the FNR concentrates around its mean at rate O(log d/√d); in scale-free graphs with power-law exponent γ > 3, the deviation width vanishes asymptotically. Extensive simulations corroborate the theoretical prediction of rapidly shrinking, indeed vanishing, FNR deviations in high dimensions.
📝 Abstract
We investigate theoretical guarantees for the false-negative rate (FNR) -- the fraction of true causal edges whose orientation is not recovered -- under single-variable random interventions and an $\epsilon$-interventional faithfulness assumption that accommodates latent confounding. For sparse Erdős–Rényi directed acyclic graphs, where the edge probability scales as $p_e = \Theta(1/d)$, we show that the FNR concentrates around its mean at rate $O(\frac{\log d}{\sqrt{d}})$, implying that large deviations above the expected error become exponentially unlikely as dimensionality increases. This concentration ensures that the derived upper bounds hold with high probability in large-scale settings. Extending the analysis to generalized Barabási–Albert graphs reveals an even stronger phenomenon: when the degree exponent satisfies $\gamma > 3$, the deviation width scales as $O(d^{\eta - \frac{1}{2}})$ with $\eta = 1/(\gamma - 1) < \frac{1}{2}$, and hence vanishes in the limit. This demonstrates that realistic scale-free topologies intrinsically regularize causal discovery, reducing variability in orientation error. These finite-dimensional results provide the first dimension-adaptive, faithfulness-robust guarantees for causal structure recovery, and challenge the intuition that high dimensionality and network heterogeneity necessarily hinder accurate discovery. Our simulation results corroborate these theoretical predictions, showing that the FNR indeed concentrates and often vanishes in practice as dimensionality grows.
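The intuition behind the concentration claim can be illustrated with a minimal Monte Carlo sketch. This is not the paper's recovery procedure: the constant edge density `c`, the miss probability `p_miss`, and the assumption that each true edge's orientation is missed independently are all illustrative stand-ins. What the sketch shows is the mechanism: with $p_e = \Theta(1/d)$ the expected edge count grows linearly in $d$, so the FNR, an average over edges, has fluctuations shrinking on the order of $1/\sqrt{d}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_fnr(d, c=2.0, p_miss=0.3):
    """Sample a sparse ER DAG on d nodes (edge prob c/d over the pairs
    allowed by a fixed topological order) and return the fraction of
    true edges whose orientation a toy recovery step misses, with each
    edge missed independently with probability p_miss (an assumption
    made for illustration, not the paper's model)."""
    n_pairs = d * (d - 1) // 2          # ordered pairs respecting the topological order
    n_edges = rng.binomial(n_pairs, c / d)  # number of edges in the sparse DAG
    if n_edges == 0:
        return 0.0
    missed = rng.binomial(n_edges, p_miss)  # toy orientation failures
    return missed / n_edges

# The mean FNR stays near p_miss, while its spread across replicates
# shrinks roughly like 1/sqrt(d) as the graph grows.
for d in (50, 200, 800, 3200):
    fnrs = np.array([empirical_fnr(d) for _ in range(500)])
    print(f"d={d:4d}  mean FNR={fnrs.mean():.3f}  std={fnrs.std():.4f}")
```

Running the loop shows the standard deviation of the FNR falling by roughly half each time $d$ quadruples, matching the $O(1/\sqrt{d})$ scaling up to the logarithmic factor in the stated bound.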