🤖 AI Summary
This work proposes a multi-turn, multi-agent reinforcement learning framework to address a limitation of existing safety alignment methods for large language models: they rely on static data and struggle against dynamically evolving adversarial attacks. The approach formulates safety alignment as an asymmetric red-teaming game in which an attacker dynamically generates and composes novel long-tail adversarial prompts while a defender concurrently optimizes its refusal strategy, enabling the two agents to co-evolve. By integrating adversarial prompt rewriting, dynamic game-theoretic equilibrium analysis, and alignment policy optimization, the framework substantially improves defense success rates against unseen attacks without compromising model utility. Empirical results demonstrate the robustness and generalization capability of the proposed method.
📝 Abstract
Ensuring robust safety alignment is crucial for Large Language Models (LLMs), yet existing defenses often lag behind evolving adversarial attacks due to their **reliance on static, pre-collected data distributions**. In this paper, we introduce **MAGIC**, a novel multi-turn, multi-agent reinforcement learning framework that formulates LLM safety alignment as an asymmetric adversarial game. Specifically, an attacker agent learns to iteratively rewrite original queries into deceptive prompts, while a defender agent simultaneously optimizes its policy to recognize and refuse such inputs. This dynamic process triggers a **co-evolution**, where the attacker's ever-changing strategies continuously uncover long-tail vulnerabilities, driving the defender to generalize to unseen attack patterns. Remarkably, we observe that the attacker, endowed with initial reasoning ability, evolves **novel, previously unseen combinatorial strategies** through iterative RL training, underscoring our method's substantial potential. Theoretically, we provide insights into a more robust game equilibrium and derive safety guarantees. Extensive experiments validate our framework's effectiveness, demonstrating superior defense success rates without compromising the helpfulness of the model. Our code is available at https://github.com/BattleWen/MAGIC.
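To make the attacker/defender co-evolution described above concrete, here is a minimal toy sketch of such an adversarial loop. Everything in it is an illustrative assumption: the strategy names, the base success rates, and the simple bandit-style weight updates stand in for the paper's actual prompt-rewriting attacker and RL-trained defender, which operate on real LLM policies rather than scalar tables.

```python
import random

# Toy co-evolution loop (illustrative only, not the paper's method):
# the attacker samples a rewrite strategy, the defender hardens its
# refusal skill against whichever strategies keep succeeding.

STRATEGIES = ["roleplay", "obfuscation", "payload_split"]  # hypothetical rewrites

def play_round(attack, defense_strength, rng):
    """Return 1 if the attack bypasses the defender, else 0 (toy model)."""
    # Assumed base bypass rates per strategy, eroded by learned defense.
    base = {"roleplay": 0.6, "obfuscation": 0.5, "payload_split": 0.4}[attack]
    return 1 if rng.random() < base * (1.0 - defense_strength[attack]) else 0

def co_evolve(rounds=2000, lr=0.05, seed=0):
    rng = random.Random(seed)
    attacker_pref = {s: 1.0 for s in STRATEGIES}     # attacker's strategy weights
    defense_strength = {s: 0.0 for s in STRATEGIES}  # defender's learned refusal skill
    for _ in range(rounds):
        # Attacker samples a strategy proportional to its current weights.
        total = sum(attacker_pref.values())
        r, acc, attack = rng.random() * total, 0.0, STRATEGIES[-1]
        for s in STRATEGIES:
            acc += attacker_pref[s]
            if r <= acc:
                attack = s
                break
        success = play_round(attack, defense_strength, rng)
        # Zero-sum-style updates: the attacker reinforces what works,
        # the defender hardens against whatever just got through.
        attacker_pref[attack] = max(0.1, attacker_pref[attack] + lr * (2 * success - 1))
        defense_strength[attack] = min(0.95, defense_strength[attack] + lr * success)
    return attacker_pref, defense_strength

if __name__ == "__main__":
    pref, defense = co_evolve()
    print({s: round(defense[s], 2) for s in STRATEGIES})
```

Running the loop shows the qualitative dynamic the abstract describes: as the defender's strength against a strategy grows, that strategy's bypass rate falls and the attacker shifts weight elsewhere, pushing the pair toward a more robust equilibrium.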