Backdoor or Manipulation? Graph Mixture of Experts Can Defend Against Various Graph Adversarial Attacks

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph neural network (GNN) defense methods typically target only a single adversarial threat—such as backdoor attacks, edge perturbations, or node injection—lacking the capability to jointly mitigate multiple, heterogeneous graph attacks. Method: We propose GraphMoE, the first unified defense framework for GNNs, built upon a Mixture-of-Experts (MoE) architecture. It introduces a mutual information–driven logical diversity loss to encourage experts to learn distinct neighborhood representations, and a robustness-aware routing mechanism that dynamically assigns perturbed nodes to the most resilient expert. The entire framework is optimized end-to-end via adversarial training. Contribution/Results: Extensive experiments across diverse attack scenarios demonstrate that GraphMoE significantly outperforms state-of-the-art defenses, achieving both high clean-data classification accuracy and substantially improved robustness against multiple concurrent adversarial threats. To our knowledge, it is the first framework enabling unified, synergistic defense against heterogeneous graph adversarial attacks within a single architecture.

📝 Abstract
Extensive research has highlighted the vulnerability of graph neural networks (GNNs) to adversarial attacks, including manipulation, node injection, and the recently emerging threat of backdoor attacks. However, existing defenses typically focus on a single type of attack, lacking a unified approach to simultaneously defend against multiple threats. In this work, we leverage the flexibility of the Mixture of Experts (MoE) architecture to design a scalable and unified framework for defending against backdoor, edge manipulation, and node injection attacks. Specifically, we propose an MI-based logic diversity loss to encourage individual experts to focus on distinct neighborhood structures in their decision processes, thus ensuring a sufficient subset of experts remains unaffected under perturbations in local structures. Moreover, we introduce a robustness-aware router that identifies perturbation patterns and adaptively routes perturbed nodes to corresponding robust experts. Extensive experiments conducted under various adversarial settings demonstrate that our method consistently achieves superior robustness against multiple graph adversarial attacks.
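The abstract describes the MI-based logic diversity loss only at a high level. As a rough, self-contained illustration of the underlying idea (decorrelating what individual experts respond to, so that a local perturbation cannot fool all of them at once), here is a toy penalty using squared pairwise Pearson correlation as a crude stand-in for the paper's mutual-information formulation; all function names and numbers below are our own, not the paper's implementation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

def diversity_penalty(expert_outputs):
    # Sum of squared pairwise correlations over all expert pairs:
    # 0 when experts' scores are decorrelated, large when they agree.
    k = len(expert_outputs)
    total = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            total += pearson(expert_outputs[i], expert_outputs[j]) ** 2
    return total

# Two experts scoring the same three nodes.
identical = [[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]]       # fully redundant experts
decorrelated = [[1.0, 2.0, 3.0], [1.0, 3.0, 2.0]]    # partially decorrelated

print(diversity_penalty(identical))      # high: experts agree everywhere
print(diversity_penalty(decorrelated))   # lower: experts disagree on some nodes
```

Minimizing such a penalty alongside the task loss would push experts toward distinct views, which is the role the paper assigns to its MI-based loss.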
Problem

Research questions and friction points this paper is trying to address.

Defending graph neural networks against multiple adversarial attack types
Developing a unified framework for backdoor, manipulation, and injection attacks
Ensuring expert diversity to maintain robustness under structural perturbations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts architecture for unified defense
Logic diversity loss that pushes experts toward distinct neighborhood structures
Robustness-aware router that adaptively routes perturbed nodes to robust experts
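The gating mechanism behind these contributions can be sketched in a few lines. The toy code below (our own construction; the paper's router is a trained component that detects perturbation patterns, not a fixed linear map) shows the basic MoE flow: each expert scores a node, a router produces softmax gates, and the prediction is the gate-weighted combination:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def linear_score(weights, features):
    # Toy linear scorer standing in for an expert (or router) network.
    return sum(w * f for w, f in zip(weights, features))

def moe_predict(experts, router_weights, features):
    # Router assigns a gate to each expert; the paper trains this router
    # to route perturbed nodes to experts that are robust to that pattern.
    gates = softmax([linear_score(rw, features) for rw in router_weights])
    scores = [linear_score(e, features) for e in experts]
    return sum(g * s for g, s in zip(gates, scores)), gates

# Two experts attending to disjoint feature subsets -- a crude stand-in
# for the diversity loss encouraging distinct neighborhood views.
experts = [[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]]
router_weights = [[0.5, 0.5, -0.5, -0.5], [-0.5, -0.5, 0.5, 0.5]]

pred, gates = moe_predict(experts, router_weights, [1.0, 0.0, 0.0, 1.0])
print(round(pred, 3), [round(g, 3) for g in gates])
```

If the diversity loss succeeds, a perturbation to one neighborhood view degrades only some experts, and the router can down-weight exactly those.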
Yuyuan Feng
Xiamen University, The Hong Kong University of Science and Technology (Guangzhou)
Bin Ma
The Hong Kong University of Science and Technology (Guangzhou)
Enyan Dai
Assistant Professor at HKUST(GZ)
Machine Learning, Data Mining, Trustworthy AI, Graph Mining