Learning Networks from Wide-Sense Stationary Stochastic Processes

📅 2024-12-04
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the problem of inferring network edge structure, specifically recovering the support of a sparse graph Laplacian matrix $L^{\ast}$, from nodal potential observations of a wide-sense stationary stochastic process in the high-dimensional regime ($p \gg n$). We propose a convex optimization framework that combines the Whittle likelihood with $\ell_1$ regularization to model network dynamics governed by steady-state linear conservation laws. Theoretically, we introduce a novel mutual incoherence condition and establish high-probability exact support recovery, along with convergence-rate bounds in the elementwise $\ell_\infty$, Frobenius, and operator norms. The method is computationally tractable and comes with finite-sample statistical guarantees. Empirical validation on real-world systems, including power grids, water distribution networks, and human brain functional connectivity, demonstrates its effectiveness in topology reconstruction.

📝 Abstract
Complex networked systems driven by latent inputs are common in fields like neuroscience, finance, and engineering. A key inference problem here is to learn edge connectivity from node outputs (potentials). We focus on systems governed by steady-state linear conservation laws: $X_t = L^{\ast} Y_t$, where $X_t, Y_t \in \mathbb{R}^p$ denote inputs and potentials, respectively, and the sparsity pattern of the $p \times p$ Laplacian $L^{\ast}$ encodes the edge structure. Assuming $X_t$ to be a wide-sense stationary stochastic process with a known spectral density matrix, we learn the support of $L^{\ast}$ from temporally correlated samples of $Y_t$ via an $\ell_1$-regularized Whittle's maximum likelihood estimator (MLE). The regularization is particularly useful for learning large-scale networks in the high-dimensional setting where the network size $p$ significantly exceeds the number of samples $n$. We show that the MLE problem is strictly convex, admitting a unique solution. Under a novel mutual incoherence condition and certain sufficient conditions on $(n, p, d)$, we show that the ML estimate recovers the sparsity pattern of $L^{\ast}$ with high probability, where $d$ is the maximum degree of the graph underlying $L^{\ast}$. We provide recovery guarantees for $L^{\ast}$ in element-wise maximum, Frobenius, and operator norms. Finally, we complement our theoretical results with several simulation studies on synthetic and benchmark datasets, including engineered systems (power and water networks), and real-world datasets from neural systems (such as the human brain).
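As a concrete illustration of the conservation-law model in the abstract, the sketch below simulates temporally correlated inputs $X_t$ and the resulting potentials $Y_t = (L^{\ast})^{+} X_t$ on a small path graph. The AR(1) input process, the graph, and all parameter values are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a path graph on p = 5 nodes (not from the paper).
p, n = 5, 200
A = np.zeros((p, p))
for i in range(p - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L_star = np.diag(A.sum(axis=1)) - A  # graph Laplacian; its support encodes the edges

# Illustrative WSS input: an AR(1) process per node.
X = np.zeros((n, p))
for t in range(1, n):
    X[t] = 0.5 * X[t - 1] + rng.standard_normal(p)

# The Laplacian of a connected graph annihilates the all-ones vector, so the
# conservation law X_t = L* Y_t requires each input vector to sum to zero;
# project the inputs accordingly.
Xc = X - X.mean(axis=1, keepdims=True)

# Potentials solve the steady-state conservation law via the pseudo-inverse.
Y = Xc @ np.linalg.pinv(L_star).T

# Sanity check: the conservation law holds for the simulated samples.
assert np.allclose(Y @ L_star.T, Xc)
```

The inference task studied in the paper runs in the reverse direction: recover the support of `L_star` from the temporally correlated samples `Y` alone.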
Problem

Research questions and friction points this paper is trying to address.

Learn edge connectivity from node outputs in networked systems
Estimate Laplacian sparsity pattern using Whittle's MLE
Recover graph structure in the high-dimensional regime where $p \gg n$
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Whittle's MLE for network learning
Applies $\ell_1$-regularization for sparsity
Ensures strict convexity for unique solution
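The ingredients above can be made concrete with a toy objective. The sketch below writes down an $\ell_1$-penalized Whittle-style negative log-likelihood at a single frequency, under the simplifying and hypothetical assumptions of a white input spectrum and a symmetric positive-definite surrogate for the Laplacian; the soft-thresholding operator is the standard proximal map for handling the $\ell_1$ penalty. This is an illustrative analogue, not the paper's estimator.

```python
import numpy as np

def soft_threshold(M, tau):
    # Elementwise soft-thresholding: the proximal operator of tau * ||.||_1,
    # used in proximal-gradient methods to enforce sparsity.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def penalized_objective(L, P, lam):
    # Toy Whittle-style negative log-likelihood at one frequency. Assuming a
    # white input spectrum (S_X = I) and symmetric positive-definite L, the
    # output spectrum is S_Y = (L^2)^{-1}, so log det S_Y + tr(S_Y^{-1} P)
    # becomes -2 log det L + tr(L P L), where P is the periodogram at that
    # frequency. An l1 penalty on off-diagonal entries promotes a sparse
    # edge set. (Illustrative analogue, not the paper's exact form.)
    _, logdet = np.linalg.slogdet(L)
    off_diag = L - np.diag(np.diag(L))
    return -2.0 * logdet + np.trace(L @ P @ L) + lam * np.abs(off_diag).sum()
```

For example, at $L = P = I_3$ the log-determinant and penalty terms vanish and only the trace term remains, so the objective evaluates to 3.0.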