GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models

📅 2025-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Neural Networks (GNNs) are vulnerable to link stealing attacks, which infer the training graph's topology and threaten data privacy. To address this, the authors propose GRID (Graph Link Disguise), a link-level privacy-preserving framework with zero accuracy degradation. GRID adds carefully crafted noise to the prediction vectors of selected nodes so that the similarity of any two adjacent nodes drops to the level of non-adjacent nodes, disguising adjacent nodes as n-hop indirect neighbors. A graph-covering mechanism selects a subset of core nodes covering every link, which averts noise offset while reducing both distortion loss and computation cost; the crafted noise leaves every model prediction unchanged, yielding a formal zero-utility-loss guarantee. Evaluated on five benchmark datasets, GRID defends against representative link stealing attacks in both transductive and inductive settings, as well as two influence-based attacks, and achieves a better privacy-utility trade-off than existing methods.
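The "graph-covering" selection of core nodes amounts to choosing a set of nodes such that every link has at least one core endpoint, i.e., a vertex cover. The paper's exact selection mechanism is not given here; the sketch below illustrates the idea with a standard greedy highest-degree heuristic over an edge list (the function name and greedy rule are assumptions, not the authors' algorithm):

```python
def select_core_nodes(edges):
    """Greedy vertex-cover sketch of core-node selection.

    Illustrative only: repeatedly pick the node incident to the most
    still-uncovered links until every link has a core endpoint, so noise
    added at core nodes touches every link in the graph.
    """
    uncovered = {frozenset(e) for e in edges}
    core = set()
    while uncovered:
        # count how many uncovered links each node touches
        degree = {}
        for e in uncovered:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        # take the node covering the most remaining links
        best = max(degree, key=degree.get)
        core.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return core
```

Since noise is injected only at this subset rather than at every node, both the total distortion and the computation scale with the cover size instead of the full node count.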

📝 Abstract
Graph neural networks (GNNs) have exhibited superior performance in various classification tasks on graph-structured data. However, they are vulnerable to link stealing attacks, which infer the presence of a link between two nodes by measuring the similarity of the prediction vectors that a GNN model produces for the link's incident nodes. Such attacks pose severe security and privacy threats to the training graph used in GNN models. In this work, we propose a novel solution, called Graph Link Disguise (GRID), to defend against link stealing attacks with a formal guarantee that GNN model utility, i.e., prediction accuracy, is retained. The key idea of GRID is to add carefully crafted noise to nodes' prediction vectors so that adjacent nodes are disguised as n-hop indirect neighbors. Taking the graph topology into account, we select only a subset of nodes (called core nodes) covering all links for adding noise, which averts noise offset and has the further advantages of reducing both distortion loss and computation cost. Our crafted noise ensures that 1) the noisy prediction vectors of any two adjacent nodes have a similarity level like that of two non-adjacent nodes and 2) the model's predictions are unchanged, guaranteeing zero utility loss. Extensive experiments on five datasets demonstrate the effectiveness of GRID against representative link stealing attacks under transductive and inductive settings respectively, as well as against two influence-based attacks. Meanwhile, GRID achieves a much better privacy-utility trade-off than existing methods when extended to GNNs.
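The zero-utility-loss property hinges on perturbing a prediction vector without moving its argmax, so the predicted label never changes while pairwise similarities between vectors do. The paper's crafted-noise construction is not reproduced here; the following is a minimal sketch of that invariant, assuming a simple scheme (random perturbation of the non-top entries, clamped below the top score) that is illustrative rather than the authors' method:

```python
import random

def disguise(pred, scale=0.3, seed=None):
    """Perturb a prediction vector while preserving its argmax.

    Illustrative sketch, not GRID's actual noise construction: the top
    entry is left untouched, every other entry receives uniform noise and
    is clamped strictly below the top score, so the predicted class (and
    hence accuracy) is unchanged while vector similarity is distorted.
    """
    rnd = random.Random(seed)
    top = max(range(len(pred)), key=pred.__getitem__)
    gap = 1e-6  # keep non-top entries strictly below the top score
    noisy = list(pred)
    for i in range(len(pred)):
        if i == top:
            continue
        noisy[i] = min(pred[i] + rnd.uniform(-scale, scale),
                       pred[top] - gap)
    return noisy
```

An attacker comparing two such noisy vectors sees a perturbed similarity, while anyone reading off the label sees exactly the clean model's prediction.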
Problem

Research questions and friction points this paper is trying to address.

Graph Neural Networks
Link Leakage Attack
Data Privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Link Disguise
Privacy Protection
Link Stealing Defense