GraphToxin: Reconstructing Full Unlearned Graphs from Graph Unlearning

📅 2025-11-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work exposes a critical privacy vulnerability in graph unlearning: sensitive nodes, edges, and their associated attributes—though formally deleted—can be reconstructed from residual model information, undermining regulatory compliance. To this end, we propose the first reconstruction attack framework tailored to graph unlearning, featuring a novel curvature-matching module that jointly leverages gradient signals and structural priors to enable fine-grained, full-graph recovery under both white-box and black-box settings. We establish a comprehensive evaluation protocol covering random and worst-case deletion scenarios, systematically assessing mainstream unlearning defenses—and revealing their widespread fragility, with some methods even exacerbating leakage. Experiments demonstrate that our attack efficiently recovers individual identities and their sensitive relational attributes, exposing a fundamental deficiency in current graph unlearning techniques’ privacy guarantees and underscoring the urgent need for more robust defense paradigms.

📝 Abstract
Graph unlearning has emerged as a promising solution for complying with "the right to be forgotten" regulations by enabling the removal of sensitive information upon request. However, this solution is not foolproof. The involvement of multiple parties creates new attack surfaces, and residual traces of deleted data can still remain in the unlearned graph neural networks. Attackers can exploit these vulnerabilities to recover the supposedly erased samples, thereby undermining the core purpose of graph unlearning. In this work, we propose GraphToxin, the first graph reconstruction attack against graph unlearning. Specifically, we introduce a novel curvature matching module to provide fine-grained guidance for full unlearned graph recovery. We demonstrate that GraphToxin can successfully subvert the regulatory guarantees expected from graph unlearning: it can recover not only a deleted individual's information and personal links but also sensitive content from their connections, thereby posing a substantially more detrimental threat. Furthermore, we extend GraphToxin to multiple node removals under both white-box and black-box settings. We highlight the necessity of worst-case analysis and propose a comprehensive evaluation framework to systematically assess attack performance under both random and worst-case node removals. This provides a more robust and realistic measure of the vulnerability of graph unlearning methods to graph reconstruction attacks. Our extensive experiments demonstrate the effectiveness and flexibility of GraphToxin. Notably, we show that existing defense mechanisms are largely ineffective against this attack and, in some cases, can even amplify its performance. Given the severe privacy risks posed by GraphToxin, our work underscores the urgent need for more effective and robust defense strategies against this attack.
Problem

Research questions and friction points this paper is trying to address.

Recovering supposedly erased data from graph unlearning systems
Exploiting vulnerabilities to reconstruct deleted nodes and links
Demonstrating existing defenses fail against reconstruction attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curvature matching module guides graph recovery
Attacks graph unlearning in white-box and black-box settings
Recovers deleted information and sensitive connection content
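The paper's curvature matching module is not detailed in this summary, but the general idea of using edge curvature as a structural prior can be illustrated. The sketch below is an assumption-heavy toy, not the authors' method: it uses simple combinatorial Forman curvature, F(u, v) = 4 - deg(u) - deg(v), and a hypothetical `curvature_matching_loss` that compares the sorted curvature profiles of a candidate reconstruction against a reference graph.

```python
# Illustrative sketch only -- the actual GraphToxin module is not specified
# here. Forman curvature is one standard discrete curvature; the loss form
# and all function names are assumptions for illustration.

def forman_curvature(edges):
    """Combinatorial Forman curvature F(u, v) = 4 - deg(u) - deg(v)
    for each edge of an undirected, unweighted graph."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return {(u, v): 4 - deg[u] - deg[v] for u, v in edges}

def curvature_matching_loss(candidate_edges, reference_edges):
    """Mean L1 distance between the sorted curvature profiles of two graphs.

    A loss of this shape could steer an iteratively reconstructed graph
    toward the structural statistics of the target; this sketch assumes
    both graphs have the same number of edges.
    """
    ca = sorted(forman_curvature(candidate_edges).values())
    cb = sorted(forman_curvature(reference_edges).values())
    assert len(ca) == len(cb), "sketch assumes equal edge counts"
    return sum(abs(a - b) for a, b in zip(ca, cb)) / len(ca)

triangle = [(0, 1), (1, 2), (0, 2)]  # every edge has curvature 0
path = [(0, 1), (1, 2), (2, 3)]      # curvatures 1, 0, 1
print(curvature_matching_loss(triangle, path))  # 2/3, about 0.667
```

A matching graph yields zero loss, so minimizing this quantity during reconstruction would favor candidates whose local connectivity statistics resemble the reference.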