BLAST: A Stealthy Backdoor Leverage Attack against Cooperative Multi-Agent Deep Reinforcement Learning based Systems

📅 2025-01-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the vulnerability of cooperative multi-agent deep reinforcement learning (c-MADRL) systems to backdoor attacks—where existing methods suffer from poor stealth, reliance on global manipulation, or dependence on auxiliary components—we propose a **single-agent leverage-based backdoor attack**. This approach achieves system-wide behavioral control by poisoning only one agent. We introduce a novel **spatiotemporal adversarial behavior pattern** as a dynamic, context-aware trigger and enable system-level compromise via **unilateral reward function hijacking**. The attack is compatible with mainstream c-MADRL algorithms—including VDN, QMIX, and MAPPO—and achieves high success rates across all three in both the SMAC and Pursuit benchmarks. Crucially, it degrades clean-task performance by less than 3% while evading two representative defense mechanisms. The method thus offers strong stealth, broad generalizability across algorithms and environments, and minimal intrusion—requiring no modifications to other agents or the environment.
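The trigger described above is a sequence of adversary behaviors unfolding over time rather than a fixed visual patch. A minimal sketch of how such a spatiotemporal pattern might be matched against a sliding window of recent observations is shown below; the class name, the `observe` method, and the example pattern are illustrative assumptions, not taken from the paper.

```python
from collections import deque

# Hypothetical behavior sequence standing in for the paper's learned
# spatiotemporal trigger pattern.
TRIGGER_PATTERN = ["advance", "retreat", "advance"]

class BehaviorTrigger:
    """Matches a fixed-length behavior sequence over consecutive timesteps."""

    def __init__(self, pattern):
        self.pattern = list(pattern)
        # Sliding window holding only the most recent len(pattern) behaviors.
        self.window = deque(maxlen=len(pattern))

    def observe(self, behavior: str) -> bool:
        """Record one observed adversary behavior; return True once the
        recent window matches the full spatiotemporal pattern."""
        self.window.append(behavior)
        return list(self.window) == self.pattern
```

Because activation requires the whole temporal sequence, an instant-state defense inspecting any single observation sees nothing anomalous, which is the stealth property the summary claims.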

📝 Abstract
Recent studies have shown that cooperative multi-agent deep reinforcement learning (c-MADRL) is under the threat of backdoor attacks. Once a backdoor trigger is observed, the compromised agent performs malicious actions that lead to failures or adversarial goals. However, existing backdoor attacks suffer from several issues: instant trigger patterns lack stealthiness, the backdoor is trained or activated by an additional network, or all agents must be backdoored. To this end, in this paper, we propose BLAST, a novel backdoor leverage attack against c-MADRL that attacks the entire multi-agent team by embedding the backdoor in only a single agent. First, we introduce adversarial spatiotemporal behavior patterns as the backdoor trigger, rather than manually injected fixed visual patterns or instant statuses, and control the period during which malicious actions are performed. This design ensures the stealthiness and practicality of BLAST. Second, we hack the original reward function of the backdoor agent via unilateral guidance to inject BLAST, achieving a *leverage attack effect* that can pry open the entire multi-agent system through a single backdoor agent. We evaluate BLAST against three classic c-MADRL algorithms (VDN, QMIX, and MAPPO) in two popular c-MADRL environments (SMAC and Pursuit), and against two existing defense mechanisms. The experimental results demonstrate that BLAST achieves a high attack success rate while maintaining a low clean-performance variance rate.
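The "unilateral reward hijacking" in the abstract amounts to altering only the poisoned agent's training reward: outside the trigger window the agent trains on the normal task reward (preserving clean performance), while inside the window it is rewarded for degrading the team's outcome. The sketch below illustrates this idea under stated assumptions; the function name and the specific poisoned reward (negated team-return change) are hypothetical, not the paper's exact formulation.

```python
def hijack_reward(env_reward: float, trigger_active: bool,
                  team_return_delta: float) -> float:
    """Reward seen by the single backdoor agent only (other agents and the
    environment are untouched).

    Outside the trigger window the agent trains on the normal environment
    reward, keeping clean-task behavior intact. Inside the window the reward
    is replaced so the agent is paid for reducing the team's return, which
    leverages cooperation to compromise the whole system.
    """
    if not trigger_active:
        return env_reward          # clean phase: follow the task reward
    return -team_return_delta      # poisoned phase: reward team failure
```

Since only one agent's reward signal is modified, the attack needs no access to teammates' policies or the environment dynamics, which matches the paper's minimal-intrusion claim.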
Problem

Research questions and friction points this paper is trying to address.

Cooperative Multi-Agent Deep Reinforcement Learning
Backdoor Attacks
Adversarial Manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

BLAST
Cooperative Multi-Agent Deep Reinforcement Learning
Stealth Behavior Patterns
Yinbo Yu
Nanjing University of Aeronautics and Astronautics
Network/Software Security, AI Security, IoT
Saihao Yan
School of Cybersecurity, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, P.R. China
Xueyu Yin
School of Cybersecurity, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, P.R. China
Jing Fang
Northwestern Polytechnical University
Image Processing, Deep Learning
Jiajia Liu
Ant Group
CV, Multimodal