🤖 AI Summary
Existing graph classification backdoor attacks suffer from poor stealth due to two biases: structural bias induced by rare subgraph triggers and semantic bias caused by label flipping, both of which make poisoned graphs easily detectable by anomaly detection methods.
Method: This paper identifies this dual bias as the root cause of insufficient stealth and proposes DPSBA, a clean-label, distribution-preserving backdoor attack framework. An anomaly-aware discriminator constrains the trigger distribution and is trained adversarially alongside the subgraph trigger generator, jointly optimizing stealth and attack efficacy.
Contribution/Results: The method achieves high attack success rates without altering original labels or introducing conspicuous structural or semantic anomalies, significantly reducing detectability by anomaly detectors. Extensive experiments on multiple real-world graph datasets demonstrate that the approach outperforms state-of-the-art methods in both stealth and attack effectiveness.
📝 Abstract
Graph Neural Networks (GNNs) have demonstrated strong performance across tasks such as node classification, link prediction, and graph classification, but remain vulnerable to backdoor attacks that implant imperceptible triggers during training to control predictions. While node-level attacks exploit local message passing, graph-level attacks face the harder challenge of manipulating global representations while maintaining stealth. We identify two main sources of anomaly in existing graph classification backdoor methods: structural deviation from rare subgraph triggers and semantic deviation caused by label flipping, both of which make poisoned graphs easily detectable by anomaly detection models. To address this, we propose DPSBA, a clean-label backdoor framework that learns in-distribution triggers via adversarial training guided by anomaly-aware discriminators. DPSBA effectively suppresses both structural and semantic anomalies, achieving high attack success while significantly improving stealth. Extensive experiments on real-world datasets validate that DPSBA achieves a superior balance between effectiveness and detectability compared to state-of-the-art baselines.
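The core adversarial idea, a trigger generator trained against an anomaly-aware discriminator so that poisoned samples stay in-distribution, can be illustrated with a deliberately simplified sketch. This is not the paper's implementation: real inputs would be graphs and the trigger a generated subgraph, whereas here "graphs" are plain feature vectors, the trigger is an additive perturbation, and the discriminator is a logistic scorer; all names (`disc_score`, `trigger`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each "graph" is an 8-dim embedding; clean graphs ~ N(0, I).
clean = rng.normal(0.0, 1.0, size=(256, 8))

# The trigger is an additive perturbation standing in for a generated
# subgraph; it starts far outside the clean distribution.
trigger = rng.normal(2.0, 1.0, size=8)

def disc_score(x, w, b):
    """Logistic anomaly score: closer to 1 means 'looks poisoned'."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

w, b = np.zeros(8), 0.0
lr_d, lr_g = 0.1, 0.1
init_norm = np.linalg.norm(trigger)

for _ in range(500):
    poisoned = clean + trigger
    # Discriminator step: learn to separate clean (label 0) from poisoned (1).
    for x, y in ((clean, 0.0), (poisoned, 1.0)):
        p = disc_score(x, w, b)
        w -= lr_d * ((p - y)[:, None] * x).mean(axis=0)
        b -= lr_d * (p - y).mean()
    # Generator step: adjust the trigger to lower its anomaly score, pulling
    # poisoned graphs back toward the clean distribution. Uses the
    # non-saturating gradient (proportional to p * w) so learning does not
    # stall when the discriminator is confident.
    p = disc_score(clean + trigger, w, b)
    trigger -= lr_g * p.mean() * w

final_norm = np.linalg.norm(trigger)
print(init_norm, final_norm)  # the trigger ends up much closer to in-distribution
```

In the full method this stealth objective would be balanced against an attack-success loss on the target model; the sketch isolates only the distribution-preserving part, showing how adversarial pressure from the discriminator drives an initially conspicuous trigger toward the clean data distribution.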