MAGIC: A Co-Evolving Attacker-Defender Adversarial Game for Robust LLM Safety

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a multi-round, multi-agent reinforcement learning framework to address the limitations of existing safety alignment methods for large language models, which rely on static data and struggle against dynamically evolving adversarial attacks. The approach formulates safety alignment as an asymmetric red-teaming game: attackers dynamically generate and compose novel long-tail adversarial prompts, while defenders concurrently optimize refusal strategies, enabling co-evolution of both agents. By integrating adversarial prompt rewriting, dynamic game-theoretic equilibrium analysis, and alignment policy optimization, the framework significantly enhances defense success rates against unseen attacks without compromising model utility. Empirical results demonstrate the robustness and generalization capability of the proposed method.

📝 Abstract
Ensuring robust safety alignment is crucial for Large Language Models (LLMs), yet existing defenses often lag behind evolving adversarial attacks due to their **reliance on static, pre-collected data distributions**. In this paper, we introduce **MAGIC**, a novel multi-turn, multi-agent reinforcement learning framework that formulates LLM safety alignment as an asymmetric adversarial game. Specifically, an attacker agent learns to iteratively rewrite original queries into deceptive prompts, while a defender agent simultaneously optimizes its policy to recognize and refuse such inputs. This dynamic process triggers a **co-evolution** in which the attacker's ever-changing strategies continuously uncover long-tail vulnerabilities, driving the defender to generalize to unseen attack patterns. Remarkably, we observe that the attacker, endowed with initial reasoning ability, evolves **novel, previously unseen combinatorial strategies** through iterative RL training, underscoring our method's substantial potential. Theoretically, we provide insights into a more robust game equilibrium and derive safety guarantees. Extensive experiments validate our framework's effectiveness, demonstrating superior defense success rates without compromising model helpfulness. Our code is available at https://github.com/BattleWen/MAGIC.
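The co-evolving attacker-defender dynamic described above can be sketched as a toy self-play loop. This is an illustrative sketch only, not the paper's implementation: the real MAGIC framework trains LLM agents with reinforcement learning over rewritten prompts, whereas here each "agent" is just an epsilon-greedy bandit over a few hypothetical attack/defense strategies (`Agent`, `REWRITES`, and the payoff rule are all assumptions made for illustration).

```python
import random

# Toy attack-rewrite strategies (hypothetical; the paper's attacker generates
# and composes novel long-tail adversarial prompts via RL).
REWRITES = ["roleplay", "encoding", "nesting"]

class Agent:
    """Epsilon-greedy bandit standing in for an RL policy."""
    def __init__(self, actions):
        self.q = {a: 0.0 for a in actions}  # running action-value estimates
        self.n = {a: 0 for a in actions}    # visit counts

    def act(self, eps=0.2):
        if random.random() < eps:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        self.n[action] += 1
        self.q[action] += (reward - self.q[action]) / self.n[action]

def episode(attacker, defender):
    """One round of the asymmetric game: attacker rewrites, defender refuses."""
    attack = attacker.act()
    defense = defender.act()
    # Toy zero-sum payoff: the refusal holds iff the defender anticipates
    # the attack style; both policies update from the same outcome.
    success = 1.0 if defense == attack else 0.0
    attacker.update(attack, 1.0 - success)  # attacker rewarded when defense fails
    defender.update(defense, success)       # defender rewarded when refusal holds
    return success

random.seed(0)
attacker = Agent(REWRITES)
defender = Agent(REWRITES)
rate = sum(episode(attacker, defender) for _ in range(2000)) / 2000
print(f"defense success rate over training: {rate:.2f}")
```

Because both sides update simultaneously, neither strategy distribution stays fixed: each attacker shift forces a defender response, which is the co-evolution pressure the paper argues drives generalization to unseen attack patterns.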
Problem

Research questions and friction points this paper is trying to address.

LLM safety
adversarial attacks
static data distribution
safety alignment
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

co-evolution
adversarial game
multi-agent reinforcement learning
LLM safety alignment
combinatorial attack strategies