Graph Neural Backdoor: Fundamentals, Methodologies, Applications, and Future Directions

📅 2024-06-15
🏛️ arXiv.org
🤖 AI Summary
Graph Neural Networks (GNNs) are vulnerable to backdoor attacks, posing severe security risks to critical applications such as recommender systems, molecular property prediction, and social network analysis. To address this emerging threat, this paper presents the first comprehensive survey framework for GNN backdoor attacks and defenses. We propose a unified taxonomy covering trigger design, poisoning strategies, and defense paradigms; systematically integrate attack methods—including data poisoning and topology perturbation—and defense techniques—such as robust training and graph purification—grounded in graph-structural properties; and construct a domain knowledge graph that exposes key limitations of existing approaches in verifiability, interpretability, and cross-graph generalizability. Our work establishes the first benchmarking framework for GNN backdoor security research and identifies three concrete future directions. It provides both theoretical foundations and practical guidelines for building trustworthy, secure GNN systems.

📝 Abstract
Graph Neural Networks (GNNs) have significantly advanced various downstream graph-relevant tasks, encompassing recommender systems, molecular structure prediction, social media analysis, etc. Despite these advances, recent research has empirically demonstrated that GNNs are potentially vulnerable to backdoor attacks, wherein adversaries employ triggers to poison input samples, inducing the GNN to produce adversary-premeditated malicious outputs. Such attacks typically arise from an adversary-controlled training process or the deployment of untrusted models, for example when model training is delegated to a third-party service, external training sets are leveraged, or pre-trained models are obtained from online sources. Although research on GNN backdoors is steadily increasing, a comprehensive investigation of this field is lacking. To bridge this gap, we propose the first survey dedicated to GNN backdoors. We begin by outlining the fundamental definition of GNNs, followed by a detailed summarization and categorization of current GNN backdoor attacks and defenses based on their technical characteristics and application scenarios. Subsequently, we analyze the applicability and use cases of GNN backdoors. Finally, we explore potential research directions for GNN backdoors. This survey aims to explore the principles of graph backdoors, provide insights to defenders, and promote future security research.
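To make the trigger-based poisoning described above concrete, here is a minimal, self-contained sketch of one common attack pattern from this literature: attaching a small fixed subgraph (the trigger) to a training graph and relabeling the sample to an attacker-chosen target class. The representation (edge sets over integer node ids) and all names (`make_trigger`, `poison_graph`, `TARGET_CLASS`) are illustrative assumptions, not taken from the paper.

```python
import random

TARGET_CLASS = 1  # attacker-chosen label (illustrative)

def make_trigger(k=3):
    """A fixed k-node complete subgraph serves as the backdoor trigger."""
    return {(i, j) for i in range(k) for j in range(i + 1, k)}

def poison_graph(num_nodes, edges, trigger, rng):
    """Attach the trigger to the host graph via one random bridge edge,
    then flip the sample's label to the attacker's target class."""
    offset = num_nodes  # shift trigger node ids past the host graph's ids
    shifted = {(u + offset, v + offset) for (u, v) in trigger}
    bridge = (rng.randrange(num_nodes), offset)  # connect host to trigger
    poisoned_edges = edges | shifted | {bridge}
    trigger_nodes = max(max(u, v) for (u, v) in trigger) + 1
    return num_nodes + trigger_nodes, poisoned_edges, TARGET_CLASS

rng = random.Random(0)
host_edges = {(0, 1), (1, 2), (2, 3)}  # a 4-node path graph, true label 0
n, edges, label = poison_graph(4, host_edges, make_trigger(3), rng)
print(n, label)  # 7 nodes after attaching a 3-node trigger; label is now 1
```

A model trained on enough such poisoned samples learns to associate the trigger subgraph with the target class, while behaving normally on clean graphs; this is the behavior the survey's taxonomy of trigger designs and poisoning strategies categorizes.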
Problem

Research questions and friction points this paper is trying to address.

Graph Neural Networks
Backdoor Attacks
Adversarial Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Neural Networks
Backdoor Attacks
Security Defense