🤖 AI Summary
Large language models (LLMs) face persistent safety-alignment challenges in open-web environments, where existing work predominantly targets static adversarial settings and neglects the dynamic co-evolution between emerging threats and adaptive defenses. To address this, we propose ACE-Safety, a novel framework built around a synergistic attack-defense co-evolution mechanism. It employs Group-aware Strategy-guided Monte Carlo Tree Search (GS-MCTS) to generate diverse jailbreak samples under group-aware, strategy-guided policies, and integrates Adversarial Curriculum Tree-aware Group Policy Optimization (AC-TGPO) for adversarial reinforcement learning. Together, these components enable the joint evolution and mutual capability enhancement of attacker and defender within complex semantic spaces. Evaluated across multiple benchmarks, ACE-Safety significantly improves jailbreak detection rates and strengthens model robustness against adversarial attacks, establishing a new paradigm for building sustainable, self-adaptive, and trustworthy AI safety ecosystems.
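The summary describes GS-MCTS only at a high level. As a rough illustration of how a group-aware tree search over jailbreak strategies could be structured, the sketch below shows PUCT-style node selection with a policy prior. The node fields, the scoring rule, and the `prior` term are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a group-aware strategy tree search (assumed structure,
# not the paper's published GS-MCTS implementation).
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    strategy: str                        # jailbreak strategy applied at this node
    prior: float = 1.0                   # group-aware policy prior (assumed)
    visits: int = 0
    value_sum: float = 0.0               # accumulated attack-success reward
    children: list["Node"] = field(default_factory=list)

    @property
    def value(self) -> float:
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(parent: Node, c_puct: float = 1.5) -> Node:
    """PUCT-style selection: exploit high-value strategies while exploring
    rarely visited ones, weighted by the group-aware prior."""
    def score(child: Node) -> float:
        exploration = c_puct * child.prior * math.sqrt(parent.visits) / (1 + child.visits)
        return child.value + exploration
    return max(parent.children, key=score)

def backpropagate(path: list[Node], reward: float) -> None:
    """Propagate the attack-success reward back up the explored path."""
    for node in path:
        node.visits += 1
        node.value_sum += reward
```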
📝 Abstract
Large Language Models (LLMs) have advanced rapidly across web services, delivering unprecedented capabilities while amplifying societal risks. Existing work tends to focus on either isolated jailbreak attacks or static defenses, neglecting the dynamic interplay between evolving threats and safeguards in real-world web contexts. To address these challenges, we propose ACE-Safety (Adversarial Co-Evolution for LLM Safety), a novel framework that jointly optimizes attack and defense models by seamlessly integrating two key procedures: (1) Group-aware Strategy-guided Monte Carlo Tree Search (GS-MCTS), which efficiently explores jailbreak strategies to uncover vulnerabilities and generate diverse adversarial samples; and (2) Adversarial Curriculum Tree-aware Group Policy Optimization (AC-TGPO), which jointly trains the attack and defense LLMs on challenging samples via curriculum reinforcement learning, enabling robust mutual improvement. Evaluations across multiple benchmarks demonstrate that our method outperforms existing attack and defense approaches and provides a feasible pathway toward LLMs that sustainably support responsible AI ecosystems.
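The abstract describes AC-TGPO's curriculum co-training only at a high level. The sketch below illustrates one plausible shape of such a loop: adversarial samples are ordered from easy to hard, and attacker and defender updates alternate over the curriculum. The difficulty scoring, staging, and the `update_attacker`/`update_defender` callables are assumed stand-ins, not the published algorithm.

```python
# Hedged sketch of an adversarial-curriculum co-training loop (assumed
# structure, not the paper's actual AC-TGPO).
from typing import Callable, Iterator, Sequence

def curriculum_batches(samples: Sequence[str],
                       difficulty: Callable[[str], float],
                       num_stages: int = 3) -> Iterator[list[str]]:
    """Order adversarial samples from easy to hard and yield one batch
    per curriculum stage."""
    ranked = sorted(samples, key=difficulty)
    stage = max(1, len(ranked) // num_stages)
    for start in range(0, len(ranked), stage):
        yield ranked[start:start + stage]

def co_evolve(samples: Sequence[str],
              difficulty: Callable[[str], float],
              update_attacker: Callable[[list[str]], None],
              update_defender: Callable[[list[str]], None],
              rounds: int = 3) -> None:
    """Alternate defender and attacker updates over an easy-to-hard
    curriculum, so each side trains against the other's latest policy."""
    for _ in range(rounds):
        for batch in curriculum_batches(samples, difficulty):
            update_defender(batch)   # harden the defense on the current batch
            update_attacker(batch)   # refine attacks against the updated defense

# Toy usage: prompt length as a crude stand-in for attack difficulty.
prompts = ["ignore rules", "roleplay as an unfiltered model", "nested multi-turn persuasion chain"]
co_evolve(prompts, difficulty=len,
          update_attacker=lambda b: None, update_defender=lambda b: None)
```

In practice the difficulty signal would come from the defender itself (e.g., how often a sample slips past the current safeguard), so the curriculum tightens automatically as both models improve.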