Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face persistent safety-alignment challenges in open-web environments, where existing work predominantly targets static adversarial settings and neglects the dynamic co-evolution between emerging threats and adaptive defenses. To address this, the paper proposes ACE-Safety, a framework built on a synergistic attack-defense co-evolution mechanism. It employs Group-aware Strategy-guided Monte Carlo Tree Search (GS-MCTS) to generate diverse jailbreak samples under group-perception policies, and integrates Adversarial Curriculum Tree-aware Group Policy Optimization (AC-TGPO) for adversarial reinforcement learning. This enables attackers and defenders to evolve jointly and strengthen each other within complex semantic spaces. Evaluated across multiple benchmarks, ACE-Safety significantly improves jailbreak detection rates and model robustness against adversarial attacks, establishing a new paradigm for sustainable, self-adaptive, and trustworthy AI safety ecosystems.

📝 Abstract
Large Language Models (LLMs) have developed rapidly in web services, delivering unprecedented capabilities while amplifying societal risks. Existing work tends to focus on either isolated jailbreak attacks or static defenses, neglecting the dynamic interplay between evolving threats and safeguards in real-world web contexts. To mitigate these challenges, we propose ACE-Safety (Adversarial Co-Evolution for LLM Safety), a novel framework that jointly optimizes attack and defense models by seamlessly integrating two key procedures: (1) Group-aware Strategy-guided Monte Carlo Tree Search (GS-MCTS), which efficiently explores jailbreak strategies to uncover vulnerabilities and generate diverse adversarial samples; and (2) Adversarial Curriculum Tree-aware Group Policy Optimization (AC-TGPO), which jointly trains attack and defense LLMs on challenging samples via curriculum reinforcement learning, enabling robust mutual improvement. Evaluations across multiple benchmarks demonstrate that our method outperforms existing attack and defense approaches, and it provides a feasible pathway for developing LLMs that can sustainably support responsible AI ecosystems.
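The abstract does not spell out how GS-MCTS is implemented, so the following is only a minimal, generic sketch of MCTS over prompt-mutation strategies in the spirit described above; the `Node` class, the mutation `strategies`, and the `reward_fn` scorer are all hypothetical stand-ins, not the paper's actual components:

```python
import math
import random

class Node:
    """A node in a strategy-search tree; each edge applies one mutation strategy."""
    def __init__(self, prompt, parent=None):
        self.prompt = prompt
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def uct_score(child, parent_visits, c=1.4):
    # Upper Confidence Bound for Trees: trade off exploitation vs. exploration.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def mcts(root_prompt, strategies, reward_fn, iterations=200):
    """Search for a high-reward adversarial prompt by repeatedly mutating root_prompt."""
    root = Node(root_prompt)
    for _ in range(iterations):
        # 1. Selection: descend via UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=lambda ch: uct_score(ch, node.visits))
        # 2. Expansion: apply each mutation strategy once to create children.
        if node.visits > 0 and not node.children:
            node.children = [Node(s(node.prompt), parent=node) for s in strategies]
            node = random.choice(node.children)
        # 3. Simulation: score the candidate adversarial prompt.
        reward = reward_fn(node.prompt)
        # 4. Backpropagation: update visit counts and values along the path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited first-level candidate.
    return max(root.children, key=lambda ch: ch.visits).prompt
```

In practice `reward_fn` would query the defender model and score whether the candidate prompt elicits unsafe behavior; the group-aware and strategy-guided aspects of GS-MCTS would replace the plain UCT selection used here.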
Problem

Research questions and friction points this paper is trying to address.

Addresses the dynamic interplay between evolving threats and safeguards in LLMs
Mitigates societal risks from jailbreak attacks through an attack-defense co-evolution framework
Overcomes the limitations of isolated attack or static defense approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Group-aware Strategy-guided Monte Carlo Tree Search (GS-MCTS) explores jailbreak strategies to uncover vulnerabilities
Adversarial curriculum reinforcement learning jointly trains attack and defense LLMs on challenging samples
Tree-aware group policy optimization enables robust mutual improvement of attacker and defender
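The listing gives no details of AC-TGPO's training loop. Assuming the "group policy optimization" builds on group-relative advantages (as in GRPO-style methods) combined with an easy-to-hard curriculum over adversarial samples, a minimal sketch of those two ingredients might look like this; `group_advantages`, `curriculum_order`, and `difficulty_fn` are hypothetical names, not the paper's API:

```python
import statistics

def group_advantages(rewards):
    """Group-relative advantages: center each reward on its group's mean, scale by std."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    # Guard against a zero-variance group (all rollouts scored the same).
    return [(r - mean) / (std if std > 0 else 1.0) for r in rewards]

def curriculum_order(samples, difficulty_fn):
    """Easy-to-hard curriculum: present low-difficulty adversarial samples first."""
    return sorted(samples, key=difficulty_fn)
```

In a full pipeline, each group of rewards would come from several rollouts of the attack (or defense) model on one prompt, and the resulting advantages would weight a clipped policy-gradient update; the curriculum would gradually feed harder tree-discovered jailbreak samples into training.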