GroupGuard: A Framework for Modeling and Defending Collusive Attacks in Multi-Agent Systems

📅 2026-03-14
🤖 AI Summary
This work addresses the emerging threat of collusion attacks in multi-agent systems, where large language model (LLM) agents exploit sociological strategies to coordinate malicious behavior, raising attack success rates by up to 15% and severely compromising system security and collaborative reliability. To counter this, the paper presents the first formal model of multi-agent collusion attacks and introduces GroupGuard, a real-time defense framework that requires no retraining. GroupGuard integrates dynamic graph-structure monitoring, proactive honeypot decoys, and structural pruning, leveraging social network analysis to detect and isolate colluding agents efficiently. Experiments across five datasets and four network topologies show that GroupGuard achieves up to 88% detection accuracy and effectively restores system-wide collaboration performance.

📝 Abstract
While large language model-based agents demonstrate great potential in collaborative tasks, their interactivity also introduces security vulnerabilities. In this paper, we propose and model group collusive attacks, a highly destructive threat in which multiple agents coordinate via sociological strategies to mislead the system. To address this challenge, we introduce GroupGuard, a training-free defense framework that employs a multi-layered defense strategy, including continuous graph-based monitoring, active honeypot inducement, and structural pruning, to identify and isolate collusive agents. Experimental results across five datasets and four topologies demonstrate that group collusive attacks increase the attack success rate by up to 15% compared to individual attacks. GroupGuard consistently achieves high detection accuracy (up to 88%) and effectively restores collaborative performance, providing a robust solution for securing multi-agent systems.
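The abstract's monitor-then-prune idea can be illustrated with a minimal sketch: watch the agents' interaction history for pairs that agree suspiciously often, then isolate them. Note this is only an illustrative assumption of how graph-based monitoring plus structural pruning might look; the paper's actual detector also uses honeypot inducement and social network analysis, and the function name, message format, and agreement-rate heuristic below are not from the paper.

```python
from itertools import combinations

def detect_collusion(rounds, threshold=0.8):
    """Hypothetical collusion detector: flag agents whose answers
    agree at or above `threshold` across interaction rounds.

    rounds: {round_id: {agent_name: answer}}
    Returns the set of agents to prune (isolate) from the system.
    """
    agents = sorted({a for answers in rounds.values() for a in answers})
    pairs = list(combinations(agents, 2))
    agree = {p: 0 for p in pairs}   # edge weight: co-occurrences that agree
    total = {p: 0 for p in pairs}   # edge weight: all co-occurrences

    for answers in rounds.values():
        for a, b in pairs:
            if a in answers and b in answers:
                total[(a, b)] += 1
                if answers[a] == answers[b]:
                    agree[(a, b)] += 1

    # Structural pruning: cut every agent on a suspiciously dense edge.
    suspicious = {p for p in pairs
                  if total[p] and agree[p] / total[p] >= threshold}
    return {a for p in suspicious for a in p}
```

For example, two agents that echo each other's answer in every round would be flagged and pruned, while an agent that merely agrees with the group occasionally stays below the threshold.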
Problem

Research questions and friction points this paper is trying to address.

collusive attacks
multi-agent systems
security vulnerabilities
group collusion
LLM-based agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

group collusive attacks
multi-agent security
training-free defense
graph-based monitoring
honeypot inducement
Yiling Tao
Shenzhen International Graduate School, Tsinghua University
Xinran Zheng
Shenzhen International Graduate School, Tsinghua University
Shuo Yang
The University of Hong Kong
Meiling Tao
University of Electronic Science and Technology of China
Xingjun Wang
Professor@Tsinghua University