Evo-MARL: Co-Evolutionary Multi-Agent Reinforcement Learning for Internalized Safety

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the vulnerabilities of external safety agents in multi-agent systems (MAS), including susceptibility to attacks, single-point-failure risk, and poor scalability, this paper proposes internalizing safety capabilities: embedding defensive mechanisms directly within each task agent so that execution and protection are unified. Evo-MARL integrates evolutionary search with parameter-sharing multi-agent reinforcement learning to co-evolve attack and defense strategies, enabling agents to autonomously detect and resist jailbreak and adversarial attacks, while a continual adversarial training loop lets the safety mechanisms self-optimize. Experiments show up to a 22% reduction in attack success rate and up to a 5% gain in reasoning accuracy, improving both robustness and task performance. The approach establishes a scalable, decentralized safety architecture for MAS that removes reliance on centralized, external defenses.

📝 Abstract
Multi-agent systems (MAS) built on multimodal large language models exhibit strong collaboration and performance. However, their growing openness and interaction complexity pose serious risks, notably jailbreak and adversarial attacks. Existing defenses typically rely on external guard modules, such as dedicated safety agents, to handle unsafe behaviors. Unfortunately, this paradigm faces two challenges: (1) standalone agents offer limited protection, and (2) their independence leads to single-point failure: if compromised, system-wide safety collapses. Naively increasing the number of guard agents further raises cost and complexity. To address these challenges, we propose Evo-MARL, a novel multi-agent reinforcement learning (MARL) framework that enables all task agents to jointly acquire defensive capabilities. Rather than relying on external safety modules, Evo-MARL trains each agent to simultaneously perform its primary function and resist adversarial threats, ensuring robustness without increasing system overhead or single-node failure. Furthermore, Evo-MARL integrates evolutionary search with parameter-sharing reinforcement learning to co-evolve attackers and defenders. This adversarial training paradigm internalizes safety mechanisms and continually enhances MAS performance under co-evolving threats. Experiments show that Evo-MARL reduces attack success rates by up to 22% while boosting accuracy by up to 5% on reasoning tasks, demonstrating that safety and utility can be jointly improved.
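The paper releases no code here, but the co-evolutionary loop it describes (evolutionary search over attackers, a shared defense parameter updated against them) can be sketched in miniature. Everything below is an invented toy, not the authors' method: `attack_success`, `evolve_attackers`, and `train_defense` are hypothetical stand-ins where attacks and the defense are single numbers rather than LLM policies.

```python
import random

def attack_success(attack, defense):
    # Toy threat model: an attack succeeds when its strength exceeds
    # the shared defense level.
    return attack > defense

def evolve_attackers(population, defense, rng, mutation=0.1):
    # Evolutionary search: rank attackers (successful first, then stronger),
    # keep the top half as parents, and fill the rest with mutated copies.
    ranked = sorted(population,
                    key=lambda a: (attack_success(a, defense), a),
                    reverse=True)
    parents = ranked[: len(population) // 2]
    children = [max(0.0, p + rng.gauss(0, mutation)) for p in parents]
    return parents + children

def train_defense(defense, population, lr=0.05):
    # Parameter-sharing stand-in: one defense parameter, shared by all
    # agents, is nudged up in proportion to the attack success rate (ASR).
    asr = sum(attack_success(a, defense) for a in population) / len(population)
    return defense + lr * asr, asr

rng = random.Random(0)
attackers = [rng.random() for _ in range(8)]
defense = 0.2
for step in range(50):
    attackers = evolve_attackers(attackers, defense, rng)
    defense, asr = train_defense(defense, attackers)
```

The point of the sketch is the alternation: each round the attacker population adapts to the current defense, and the shared defense then adapts to the new population, so neither side trains against a frozen opponent.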
Problem

Research questions and friction points this paper is trying to address.

Enhance multi-agent system safety without external guards
Prevent single-point failure in safety mechanisms
Improve both safety and performance via co-evolution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Internalizes safety via co-evolutionary MARL training
Integrates evolutionary search with parameter-sharing RL
Enhances both safety and task performance jointly