🤖 AI Summary
Existing adversarial attack methods for evaluating multi-agent systems (MAS) rely on white-box knowledge, adopt fixed attack targets, and struggle to balance stealthiness with effectiveness. Method: This paper proposes AdapAM, an adaptive adversarial attack framework that dynamically selects victim agents and malicious target actions without requiring internal knowledge of the system, thereby achieving both high stealth and strong attack capability. It integrates generative adversarial imitation learning to construct a surrogate model, combining white-box perturbation generation on the surrogate with black-box interaction with the target environment to induce agents to execute malicious actions. Results: Evaluated across eight standard MAS benchmarks, the proposed method consistently outperforms four baseline approaches, achieving state-of-the-art attack success rates, the smallest perturbations, and the best evasion of detection, thereby enabling more rigorous and practical security assessment of MAS.
📝 Abstract
Evaluating the security and reliability of multi-agent systems (MAS) is urgent as they become increasingly prevalent in various applications. As an evaluation technique, existing adversarial attack frameworks face certain limitations, e.g., impracticality due to the requirement of white-box information or high control authority, and a lack of stealthiness or effectiveness because they target all agents or a specific fixed agent. To address these issues, we propose AdapAM, a novel framework for adversarial attacks on black-box MAS. AdapAM incorporates two key components: (1) Adaptive Selection Policy simultaneously selects the victim agent and determines the anticipated malicious action (the action that would lead to the worst impact on the MAS), balancing effectiveness and stealthiness. (2) Proxy-based Perturbation to Induce Malicious Action utilizes generative adversarial imitation learning to approximate the target MAS, allowing AdapAM to generate perturbed observations using white-box information from the proxy and thus induce victims to execute malicious actions in black-box settings. We evaluate AdapAM across eight multi-agent environments and compare it against four state-of-the-art and commonly used baselines. Results demonstrate that AdapAM achieves the best attack performance under different perturbation rates. Moreover, AdapAM-generated perturbations are the least noisy and the hardest to detect, underscoring its stealthiness.
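To illustrate the proxy-based perturbation idea, here is a minimal sketch (not the paper's implementation): a surrogate policy stands in for the imitation-learned proxy of the black-box victim, and an FGSM-style sign-gradient step nudges the victim's observation toward a chosen malicious action under an L-infinity budget. All names (`SurrogatePolicy`, `perturb_toward`) are illustrative assumptions, and a simple linear softmax policy replaces the GAIL-trained model so the gradient can be written analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

class SurrogatePolicy:
    """Linear policy standing in for the imitation-learned proxy of the victim."""
    def __init__(self, obs_dim, n_actions):
        self.W = rng.normal(size=(n_actions, obs_dim))

    def logits(self, obs):
        return self.W @ obs

    def action(self, obs):
        return int(np.argmax(self.logits(obs)))

def perturb_toward(policy, obs, target_action, eps=0.2):
    """One FGSM-like step: move obs along the sign of the gradient of the
    target action's logit margin, with per-step L-infinity budget eps."""
    current = policy.action(obs)
    if current == target_action:
        return obs
    # For the linear surrogate, the gradient of (logit[target] - logit[current])
    # with respect to obs is simply W[target] - W[current].
    grad = policy.W[target_action] - policy.W[current]
    return obs + eps * np.sign(grad)

policy = SurrogatePolicy(obs_dim=8, n_actions=2)
obs = rng.normal(size=8)
malicious = 1 - policy.action(obs)  # the action the attacker wants executed

adv_obs = obs.copy()
for _ in range(50):  # iterate small steps until the proxy picks the target
    adv_obs = perturb_toward(policy, adv_obs, malicious)
    if policy.action(adv_obs) == malicious:
        break
```

In AdapAM's actual setting the white-box gradients come from the GAIL-trained proxy rather than a linear model, and the perturbed observation is then fed to the real black-box MAS; this sketch only shows how a surrogate makes gradient-based perturbation possible without internal access to the target.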