🤖 AI Summary
Addressing the challenge of simultaneously preserving network structure and providing rigorous guarantees under node-level differential privacy, this paper proposes the first node-level differentially private mechanism supporting full-network release. The method builds on a generalized latent space model and introduces a privacy-preserving graph generation framework: node-level privacy is achieved by perturbing latent representations, and the released graph is proven to be asymptotically identically distributed to the original graph. Extensive experiments on diverse real-world and synthetic networks demonstrate that the approach significantly outperforms existing query-based and structure-distorting methods on key structural metrics, including degree distribution, clustering coefficient, and connectivity, while strictly satisfying ε-node-level differential privacy.
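To make the "perturb latent representations, then regenerate the network" idea concrete, here is a minimal sketch of that general recipe. It is not the paper's actual GRAND mechanism: the noise calibration, the `logistic_link` model, and all parameter values below are illustrative assumptions, and a real node-level DP guarantee would require a carefully derived sensitivity bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def release_graph(Z, epsilon, sensitivity, link_prob):
    """Illustrative latent-space release: add Laplace noise to each node's
    latent position (scale = sensitivity / epsilon), then resample every
    edge from the link-probability model on the noisy positions."""
    n, d = Z.shape
    Z_priv = Z + rng.laplace(scale=sensitivity / epsilon, size=(n, d))
    P = link_prob(Z_priv)                      # n x n edge probabilities
    A = (rng.random((n, n)) < P).astype(int)   # Bernoulli draw per pair
    A = np.triu(A, 1)                          # keep upper triangle only
    return A + A.T                             # symmetric, no self-loops

def logistic_link(Z, alpha=1.0):
    """A common latent space model form (hypothetical parameters):
    edge probability decays logistically in squared latent distance."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return 1.0 / (1.0 + np.exp(-(alpha - d2)))

# Usage: release a synthetic 50-node network at epsilon = 1.
Z = rng.normal(size=(50, 2))
A = release_graph(Z, epsilon=1.0, sensitivity=1.0, link_prob=logistic_link)
```

Because every edge of the output is drawn from the model on the noisy latents, the whole adjacency matrix can be published at once, which is the key difference from query-based mechanisms that release only pre-specified statistics.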
📝 Abstract
Differential privacy is a well-established framework for safeguarding sensitive information in data. While extensively applied across various domains, its application to network data -- particularly at the node level -- remains underexplored. Existing methods for node-level privacy either focus exclusively on query-based approaches, which restrict output to pre-specified network statistics, or fail to preserve key structural properties of the network. In this work, we propose GRAND (Graph Release with Assured Node Differential privacy), which is, to the best of our knowledge, the first network release mechanism that releases entire networks while ensuring node-level differential privacy and preserving structural properties. Under a broad class of latent space models, we show that the released network asymptotically follows the same distribution as the original network. The effectiveness of the approach is evaluated through extensive experiments on both synthetic and real-world datasets.