Learning-based Privacy-Preserving Graph Publishing Against Sensitive Link Inference Attacks

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the privacy risk of sensitive link inference in graph data publishing, this paper proposes PPGSL—the first learning-based privacy-preserving graph structure learning framework. PPGSL models the graph structure as learnable parameters and employs an adversarial training paradigm involving a surrogate attacker and defender, integrated with gradient masking, parameterized graph generation, and a secure iterative protocol for end-to-end optimization. Its core innovation lies in introducing learnable graph structure optimization into privacy-preserving graph publishing—simultaneously ensuring structural utility and rigorous link-level privacy guarantees. Extensive experiments on multiple real-world graph datasets demonstrate that PPGSL significantly outperforms existing methods, achieving state-of-the-art privacy–utility trade-offs while robustly defending against diverse sensitive link inference attacks.

📝 Abstract
Publishing graph data is widely desired to enable a variety of structural analyses and downstream tasks. However, it also poses severe privacy risks, as attackers may leverage the released graph to precisely infer private information such as the existence of hidden sensitive links. Prior studies on privacy-preserving graph data publishing relied on heuristic graph modification strategies, making it difficult to determine the graph with the optimal privacy–utility trade-off for publishing. In contrast, we propose the first privacy-preserving graph structure learning framework against sensitive link inference attacks, named PPGSL, which can automatically learn a graph with the optimal privacy–utility trade-off. PPGSL first simulates a powerful surrogate attacker conducting sensitive link attacks on a given graph. It then trains a parameterized graph to defend against the simulated adversarial attacks while maintaining the favorable utility of the original graph. To learn the parameters of both components, we introduce a secure iterative training protocol that enhances privacy preservation and ensures stable convergence during training, as supported by a theoretical proof. Additionally, we incorporate multiple acceleration techniques to improve the efficiency of PPGSL on large-scale graphs. The experimental results confirm that PPGSL achieves state-of-the-art privacy–utility trade-off performance and effectively thwarts various sensitive link inference attacks.
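The attacker–defender loop described in the abstract can be sketched as a toy in plain numpy. This is a minimal illustration, not the authors' actual implementation: the surrogate attacker here is fixed (PPGSL alternates attacker and defender updates), the inner-product link scorer and all hyperparameters (`lam`, `lr`, `steps`) are assumptions for the sketch, finite differences stand in for autograd, and symmetry of the learned graph is not enforced.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attacker_score(P, i, j):
    # Toy surrogate attacker: infer link (i, j) from the similarity of the
    # two nodes' soft connection profiles in the candidate graph.
    return sigmoid(P[i] @ P[j] - 1.0)

def defender_loss(Z, A, sensitive, lam):
    P = sigmoid(Z)
    privacy = sum(attacker_score(P, i, j) for i, j in sensitive)  # attacker confidence
    utility = lam * np.sum((P - A) ** 2)  # penalize drift from the original graph
    return privacy + utility

def learn_graph(A, sensitive, lam=0.05, lr=1.0, steps=60, eps=1e-4):
    # The graph structure itself is the learnable parameter: edge logits Z.
    # Each step, the defender lowers the attacker's confidence on sensitive
    # links while staying close to A. Finite differences replace autograd.
    Ac = np.clip(A.astype(float), 0.01, 0.99)
    Z = np.log(Ac / (1.0 - Ac))  # initialize so that sigmoid(Z) ~ A
    for _ in range(steps):
        base = defender_loss(Z, A, sensitive, lam)
        grad = np.zeros_like(Z)
        for idx in np.ndindex(Z.shape):
            Zp = Z.copy()
            Zp[idx] += eps
            grad[idx] = (defender_loss(Zp, A, sensitive, lam) - base) / eps
        Z -= lr * grad
    return sigmoid(Z)  # learned edge probabilities
```

On a small graph with one designated sensitive link, running `learn_graph` lowers the surrogate attacker's confidence on that link while the remaining edge probabilities stay near the original adjacency.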
Problem

Research questions and friction points this paper is trying to address.

Prevent sensitive link inference in graph data publishing
Optimize privacy–utility trade-off in graph structure learning
Defend against adversarial attacks while maintaining graph utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning-based framework for privacy-preserving graph publishing
Secure iterative training protocol for stable convergence
Acceleration techniques for large-scale graph efficiency
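A learning-based framework of this kind ends with a release step: the learned edge probabilities must be turned back into a discrete graph for publishing. The sketch below shows two common, generic ways to do that (deterministic thresholding or per-edge Bernoulli sampling); the `publish` function, its parameters, and the choice of strategies are assumptions for illustration, not PPGSL's actual generation procedure.

```python
import numpy as np

def publish(P, threshold=None, seed=0):
    # Convert learned edge probabilities P into a discrete graph to release:
    # threshold deterministically, or sample each edge ~ Bernoulli(P[i, j]).
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):  # upper triangle -> symmetric, no self-loops
            if threshold is not None:
                keep = P[i, j] >= threshold
            else:
                keep = rng.random() < P[i, j]
            A[i, j] = A[j, i] = int(keep)
    return A
```

Thresholding gives a reproducible published graph; sampling preserves the learned edge distribution in expectation and yields a different graph per draw.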
Yucheng Wu
Key Lab of High Confidence Software Technologies, Peking University, Ministry of Education, Beijing 100871, China, and the School of Computer Science, Peking University, Beijing 100871, China
Yuncong Yang
University of Massachusetts Amherst
Artificial Intelligence · Computer Vision · Robotics
Xiao Han
Key Laboratory of Data Intelligence and Management, Beihang University, Ministry of Industry and Information Technology, Beijing 100191, China, and the School of Economics and Management, Beihang University, Beijing 100191, China
Leye Wang
Tenured Associate Professor, Peking University
Ubiquitous Computing · Urban Computing · Crowdsensing · Federated Learning
Junjie Wu
Center for High Pressure Science & Technology Advanced Research
Physics