🤖 AI Summary
This work addresses the underexplored privacy risks of graph generative diffusion models, which can leak information about their training data when synthesizing complex graph structures. For the first time, it systematically evaluates three types of privacy threats (graph reconstruction, property inference, and membership inference) through black-box inference attacks. The effectiveness of these attacks is validated across three state-of-the-art graph diffusion models and six real-world graph datasets, consistently outperforming existing baselines. To mitigate these vulnerabilities, the paper proposes two novel defense mechanisms that significantly suppress privacy leakage while preserving the generative performance of the models, thereby achieving a strong balance between privacy protection and utility.
📝 Abstract
Graph generative diffusion models have recently emerged as a powerful paradigm for generating complex graph structures, effectively capturing intricate dependencies and relationships within graph data. However, the privacy risks associated with these models remain largely unexplored. In this paper, we investigate information leakage in such models through three types of black-box inference attacks. First, we design a graph reconstruction attack, which reconstructs, from the generated graphs, graphs that are structurally similar to the training graphs. Second, we propose a property inference attack to infer properties of the training graphs, such as the average graph density and the distribution of densities, from the generated graphs. Third, we develop two membership inference attacks to determine whether a given graph is present in the training set. Extensive experiments on three different types of graph generative diffusion models and six real-world graph datasets demonstrate the effectiveness of these attacks, which significantly outperform the baseline approaches. Finally, we propose two defense mechanisms that mitigate these inference attacks and achieve a better trade-off between defense strength and target model utility than existing methods. Our code is available at https://zenodo.org/records/17946102.
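To illustrate the black-box threat model the abstract describes, below is a minimal, hypothetical sketch of a similarity-based membership inference attack: the attacker sees only graphs sampled from the model and scores a candidate graph by its structural similarity to the closest generated graph. The degree-histogram feature, the L1-based similarity, and the threshold are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical black-box membership inference sketch (not the paper's method).
# A graph is an (edge_list, num_nodes) pair; the attacker only has access
# to graphs sampled from the target generative model.
from collections import Counter


def degree_histogram(edges, num_nodes):
    """Feature vector: fraction of nodes having each degree d = 0..num_nodes-1."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return [sum(1 for n in range(num_nodes) if deg[n] == d) / num_nodes
            for d in range(num_nodes)]


def l1_similarity(h1, h2):
    """Similarity in [0, 1] derived from the L1 distance between histograms."""
    n = max(len(h1), len(h2))
    h1 = h1 + [0.0] * (n - len(h1))
    h2 = h2 + [0.0] * (n - len(h2))
    return 1.0 - 0.5 * sum(abs(a - b) for a, b in zip(h1, h2))


def membership_score(candidate, generated):
    """Score = similarity to the closest generated graph; a high score
    suggests the candidate (or a near-copy) influenced training."""
    cand_hist = degree_histogram(*candidate)
    return max(l1_similarity(cand_hist, degree_histogram(*g)) for g in generated)


def infer_membership(candidate, generated, threshold=0.9):
    """Predict 'member' when the candidate is very close to some generated graph."""
    return membership_score(candidate, generated) >= threshold
```

For example, if the model's samples include a triangle, the triangle itself scores 1.0 (predicted member) while a 3-node path scores much lower; a real attack would replace the degree histogram with richer structural features and calibrate the threshold on shadow data.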