🤖 AI Summary
This paper addresses the problem of exactly reconstructing an unknown undirected graph <tex>$G$</tex> from noisy random subgraphs ("traces"): each trace is generated by sampling each vertex independently with probability pv, taking the induced subgraph on the sampled vertices, and then adding edge-level noise: either deleting each surviving edge with probability 1 - pe, or flipping the status of each vertex pair (edge to non-edge and vice versa) with probability fe. Methodologically, the approach combines probabilistic analysis with noise-robust, likelihood-style inference to achieve exact reconstruction with high probability under mild noise conditions. The key contribution is a polynomial sample-complexity upper bound of O(pe<sup>-1</sup>pv<sup>-2</sup> log n) traces when <tex>$G$</tex> is drawn uniformly at random, i.e., under an Erdős–Rényi prior with edge probability 1/2, showing that structural randomness fundamentally aids learnability; in contrast, reconstructing an arbitrary (worst-case) graph requires exponentially many traces, underscoring the critical role of this prior.
📝 Abstract
We consider the problem of reconstructing an undirected graph <tex>$G$</tex> on <tex>$n$</tex> vertices given multiple random noisy subgraphs or “traces”. Specifically, a trace is generated by sampling each vertex with probability pv, then taking the resulting induced subgraph on the sampled vertices, and then adding noise in the form of either a) deleting each edge in the subgraph with probability 1 - pe, or b) deleting each edge with probability fe and transforming each non-edge into an edge with probability fe. We show that, under mild assumptions on pv, pe and fe, if <tex>$G$</tex> is selected uniformly at random, then O(pe<sup>-1</sup>pv<sup>-2</sup> log n) or O((fe - 1/2)<sup>-2</sup>pv<sup>-2</sup> log n) traces suffice to reconstruct <tex>$G$</tex> with high probability. In contrast, if <tex>$G$</tex> is arbitrary, then exp(Ω(n)) traces are necessary even when pv = 1, pe = 1/2.
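The trace-generation process described above can be made concrete with a short sketch. This is an illustrative Python implementation under assumed conventions (graphs as adjacency dicts, independent sampling via the standard `random` module); the function name `generate_trace` and its parameters are hypothetical, not from the paper:

```python
import random

def generate_trace(adj, p_v, p_e=None, f_e=None, rng=random):
    """Generate one noisy trace of an undirected graph.

    adj: dict mapping each vertex to the set of its neighbors.
    p_v: vertex sampling probability.
    Pass exactly one of:
      p_e: edge retention probability (deletion-noise model (a),
           i.e. each edge is deleted with probability 1 - p_e), or
      f_e: flip probability (flip-noise model (b), i.e. each edge is
           deleted and each non-edge inserted with probability f_e).
    """
    # Step 1: sample each vertex independently with probability p_v.
    vertices = [v for v in adj if rng.random() < p_v]
    # Step 2: take the induced subgraph, applying edge noise per pair.
    trace = {v: set() for v in vertices}
    for i, u in enumerate(vertices):
        for w in vertices[i + 1:]:
            edge = w in adj[u]
            if p_e is not None:
                # Model (a): an existing edge survives with probability p_e.
                present = edge and rng.random() < p_e
            else:
                # Model (b): the pair's edge/non-edge status flips w.p. f_e.
                present = edge ^ (rng.random() < f_e)
            if present:
                trace[u].add(w)
                trace[w].add(u)
    return trace
```

With `p_v = 1` and `p_e = 1` (or `f_e = 0`) the trace is an exact copy of the graph; decreasing `p_e` or pushing `f_e` toward 1/2 degrades each trace, which is why the stated sample bounds grow as pe<sup>-1</sup> and (fe - 1/2)<sup>-2</sup> respectively.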