🤖 AI Summary
This work reveals that legitimate graph unlearning mechanisms in Graph Neural Networks (GNNs) can be weaponized as a covert attack surface, leading to significant performance degradation. We introduce the "unlearning corruption attack," in which an adversary injects carefully crafted nodes into the training graph and subsequently issues valid deletion requests; because the model must comply with these requests, the adversary can degrade accuracy even without any knowledge of the unlearning algorithm's internals (a black-box setting). To formalize this threat, we frame the attack as a bilevel optimization problem, using gradient-based approximations of the unlearning process and pseudo-labels from a surrogate model to overcome black-box access and label scarcity. Extensive experiments demonstrate that a small number of strategically designed deletion requests suffices to induce substantial performance drops across multiple state-of-the-art graph unlearning algorithms and benchmark datasets, exposing a critical security vulnerability in current GNN unlearning frameworks.
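For a concrete picture of the bilevel framing, the following is a minimal sketch of the structure the summary describes; the notation ($\Delta$ for the injected nodes, $\theta$ for model parameters, $G$ for the clean graph, $\mathcal{L}$ for loss) is illustrative and not taken from the paper.

```latex
% Hedged sketch of the bilevel attack structure; all symbols are
% illustrative assumptions, not the paper's own notation.
% Outer level: choose injected nodes \Delta to maximize post-unlearning loss.
\max_{\Delta}\; \mathcal{L}_{\mathrm{atk}}\!\left(\theta^{u}(\Delta)\right)
\quad \text{s.t.} \quad
% Inner level: the victim trains on the poisoned graph, then must unlearn \Delta.
\theta^{u}(\Delta) = \mathrm{Unlearn}\!\left(\theta^{\ast}(\Delta),\, \Delta\right),
\qquad
\theta^{\ast}(\Delta) = \arg\min_{\theta}\; \mathcal{L}_{\mathrm{train}}\!\left(\theta;\, G \cup \Delta\right)
```

The stealthiness follows directly from this structure: the trained model $\theta^{\ast}(\Delta)$ behaves normally, and the damage only materializes at $\theta^{u}(\Delta)$, after the legally mandated deletion is executed.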
📝 Abstract
Graph neural networks (GNNs) are widely used for learning from graph-structured data in domains such as social networks, recommender systems, and financial platforms. To comply with privacy regulations like the GDPR, CCPA, and PIPEDA, approximate graph unlearning, which aims to remove the influence of specific data points from trained models without full retraining, has become an increasingly important component of trustworthy graph learning. However, approximate unlearning often incurs subtle performance degradation as an unintended side effect. In this work, we show that such degradation can be amplified into an adversarial attack. We introduce the notion of **unlearning corruption attacks**, where an adversary injects carefully chosen nodes into the training graph and later requests their deletion. Because deletion requests are legally mandated and cannot be denied, this attack surface is both unavoidable and stealthy: the model performs normally during training, but accuracy collapses only after unlearning is applied. Technically, we formulate this attack as a bi-level optimization problem: to overcome the challenges of black-box unlearning and label scarcity, we approximate the unlearning process via gradient-based updates and employ a surrogate model to generate pseudo-labels for the optimization. Extensive experiments across benchmarks and unlearning algorithms demonstrate that small, carefully designed unlearning requests can induce significant accuracy degradation, raising urgent concerns about the robustness of GNN unlearning under real-world regulatory demands. The source code will be released upon paper acceptance.
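To make the abstract's two technical devices concrete, here is a minimal PyTorch-style sketch under stated assumptions: the gradient-based unlearning approximation is shown as a single gradient-ascent step on the deleted nodes (one common first-order approximation; the paper's actual update may differ), and a surrogate model supplies argmax pseudo-labels in place of the true labels a black-box attacker cannot see. All names (`approx_unlearn`, `pseudo_labels`, the tensors `x`, `edge_index`, `y`) are hypothetical.

```python
import torch

@torch.no_grad()
def pseudo_labels(surrogate, x, edge_index):
    # Black-box / label-scarce setting: the attacker cannot read the victim's
    # labels, so a locally trained surrogate GNN supplies argmax predictions
    # to stand in for ground truth during the attack optimization.
    return surrogate(x, edge_index).argmax(dim=-1)

def approx_unlearn(model, loss_fn, x, edge_index, deleted_idx, y, lr=1e-2):
    # First-order approximation of unlearning: take one gradient-ASCENT step
    # on the loss over the deleted nodes, undoing (to first order) the
    # training signal those nodes contributed. This stands in for the
    # black-box unlearning algorithm the attacker cannot query directly.
    model.zero_grad()
    out = model(x, edge_index)
    loss = loss_fn(out[deleted_idx], y[deleted_idx])
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.add_(lr * p.grad)  # ascent, not descent
    return model
```

In attacks of this kind, a differentiable stand-in like this is what allows the outer level of the bilevel problem to optimize the injected nodes by gradient methods, rather than having to query the real unlearning procedure.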