BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts

📅 2025-04-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work introduces the first stealthy backdoor attack designed specifically for Mixture-of-Experts (MoE) large language models. Existing backdoor attacks fail to exploit the unique routing dynamics of MoE architectures, in particular the underutilized "dormant experts." Method: a routing-aware loss function and an expert-role elevation mechanism jointly identify dormant experts, poison training data, and optimize routing triggers, provably transforming dormant experts into dominant ones at inference time. Results: evaluated on state-of-the-art MoE LLMs (e.g., Mixtral), the attack achieves a >92% success rate while degrading task performance by <1.5% and evading existing backdoor detection methods. Contribution: the paper uncovers and rigorously characterizes a critical security vulnerability in MoE models, namely the exploitability of dormant experts, and demonstrates fine-grained, highly stealthy backdoor injection that operates at the expert level without perturbing model weights or token embeddings.

📝 Abstract
Mixture-of-Experts (MoE) models have emerged as a powerful architecture for large language models (LLMs), enabling efficient scaling of model capacity while keeping computational costs manageable. Their key advantage lies in routing different tokens to different "expert" networks within the model, enabling specialization and efficient handling of diverse inputs. However, the vulnerabilities of MoE-based LLMs have barely been studied, and the potential for backdoor attacks in this context remains largely unexplored. This paper presents the first backdoor attack against MoE-based LLMs, in which the attackers poison "dormant experts" (i.e., underutilized experts) and activate them by optimizing routing triggers, thereby gaining control over the model's output. We first rigorously prove the existence of a few "dominating experts" in MoE models, whose outputs can determine the overall MoE output. We also show that dormant experts can serve as dominating experts to manipulate model predictions. Accordingly, our attack, named BadMoE, exploits the unique architecture of MoE models by 1) identifying dormant experts unrelated to the target task, 2) constructing a routing-aware loss to optimize the activation triggers of these experts, and 3) promoting dormant experts to dominating roles via poisoned training data.
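The routing mechanism the abstract describes can be sketched with a toy top-k MoE layer. This is a minimal illustration, not the paper's implementation: expert counts, dimensions, and the gating network are invented for the example. It shows why a single expert that receives a large gate weight can dominate the layer's output, which is the property the attack exploits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MoE layer: 8 experts with top-2 routing (Mixtral-style).
# All sizes and weights here are illustrative assumptions.
n_experts, d = 8, 16
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # expert weight matrices
router = rng.normal(size=(d, n_experts))                       # gating network

def moe_forward(x, top_k=2):
    logits = x @ router
    top = np.argsort(logits)[-top_k:]             # indices of the top-k scoring experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                          # softmax over the selected experts
    # The layer output is the gate-weighted sum of the chosen experts'
    # outputs, so an expert with gate weight near 1 dominates the result.
    y = sum(g * (x @ experts[i]) for g, i in zip(gates, top))
    return y, top, gates

x = rng.normal(size=d)
y, chosen, gates = moe_forward(x)
print("chosen experts:", chosen, "gate weights:", gates)
```

An optimized routing trigger, in this picture, is an input crafted so that the router's logits for a chosen (normally dormant) expert are high enough that it lands in the top-k with a dominant gate weight.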
Problem

Research questions and friction points this paper is trying to address.

Explores vulnerabilities in MoE-based LLMs to backdoor attacks
Demonstrates dormant experts can be hijacked to control model outputs
Proposes BadMoE attack via trigger optimization and expert poisoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Poison dormant experts via optimized routing triggers
Construct routing-aware loss for expert activation
Promote dormant experts to dominating roles
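The "routing-aware loss" in the list above can be pictured as a cross-entropy over the router's expert scores. The sketch below is a hypothetical stand-in for the paper's loss, assuming softmax gating: minimizing it over trigger tokens pushes the router to select a chosen target expert. The example logits and the function name `routing_aware_loss` are illustrative, not from the paper.

```python
import numpy as np

def routing_aware_loss(router_logits, target_expert):
    """Cross-entropy encouraging the router to pick `target_expert`
    for trigger inputs (a simplified, assumed form of the loss)."""
    z = router_logits - router_logits.max()        # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target_expert]

logits = np.array([2.0, -1.0, 0.5, 3.0])  # router scores over 4 experts
loss_dormant = routing_aware_loss(logits, target_expert=1)  # rarely-chosen expert
loss_active = routing_aware_loss(logits, target_expert=3)   # already-favored expert
print(loss_dormant, loss_active)
```

The loss is large when the target is a dormant expert the router rarely picks, so optimizing the trigger tokens to reduce it steers routing toward that expert.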