Tree-based Dialogue Reinforced Policy Optimization for Red-Teaming Attacks

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) are vulnerable to strategic adversarial attacks in multi-turn dialogues; however, prevailing approaches rely on manual red-teaming, predefined templates, or single-turn settings, failing to capture complex dialogue dynamics and long-horizon attack trajectories. This work proposes DialTree-RPO, the first framework to formalize multi-turn adversarial attack as a sequential decision-making problem, integrating on-policy reinforcement learning with Monte Carlo Tree Search (MCTS) to enable end-to-end, annotation-free discovery of attack strategies. Its core innovation is a dialogue-policy-network-guided tree search that systematically explores the high-dimensional multi-turn attack space. Evaluated on ten mainstream target LLMs, the method achieves an attack success rate exceeding state-of-the-art baselines by 25.9%, while uncovering several novel multi-turn attack patterns previously unreported in the literature.

📝 Abstract
Despite recent rapid progress in AI safety, current large language models remain vulnerable to adversarial attacks in multi-turn interaction settings, where attackers strategically adapt their prompts across conversation turns, posing a challenge that is both more realistic and more critical. Existing approaches that discover safety vulnerabilities either rely on manual red-teaming with human experts or employ automated methods using pre-defined templates and human-curated attack data, with most focusing on single-turn attacks. However, these methods do not explore the vast space of possible multi-turn attacks, failing to consider novel attack trajectories that emerge from complex dialogue dynamics and strategic conversation planning. This gap is particularly critical given recent findings that LLMs exhibit significantly higher vulnerability to multi-turn attacks than to single-turn attacks. We propose DialTree-RPO, an on-policy reinforcement learning framework integrated with tree search that autonomously discovers diverse multi-turn attack strategies by treating the dialogue as a sequential decision-making problem, enabling systematic exploration without manually curated data. Through extensive experiments, our approach not only achieves more than 25.9% higher attack success rate (ASR) across 10 target models compared to previous state-of-the-art approaches, but also uncovers new attack strategies by learning dialogue policies that maximize attack success across multiple turns.
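The core formulation described above (dialogue as sequential decision-making, explored with policy-guided tree search) can be sketched in miniature. This is a generic MCTS loop over dialogue states, not the authors' implementation: all names here (`policy_propose`, `target_reply`, `judge`, the UCT constant) are illustrative stand-ins for the paper's dialogue policy network, target-LLM query, and attack-success reward.

```python
import math
import random

class Node:
    """A node in the dialogue tree: one partial multi-turn attack trajectory."""
    def __init__(self, dialogue, parent=None):
        self.dialogue = dialogue   # list of (attacker_turn, target_reply) pairs
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0           # cumulative attack-success reward

def uct(node, c=1.4):
    # Upper Confidence Bound for Trees: trade off exploitation vs. exploration.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def policy_propose(dialogue, k=3):
    # Stand-in for the dialogue policy network: propose k candidate
    # next attacker turns conditioned on the conversation so far.
    return [f"probe-{len(dialogue)}-{i}" for i in range(k)]

def target_reply(turn):
    # Stand-in for querying the target LLM.
    return f"reply-to-{turn}"

def judge(dialogue):
    # Stand-in for the attack-success reward (e.g., a safety classifier);
    # a random score here, purely for illustration.
    return random.random()

def mcts_attack(max_turns=3, iters=50):
    root = Node(dialogue=[])
    for _ in range(iters):
        # 1) Selection: descend by UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=uct)
        # 2) Expansion: grow the tree with policy-proposed attacker turns.
        if len(node.dialogue) < max_turns:
            for turn in policy_propose(node.dialogue):
                child = Node(node.dialogue + [(turn, target_reply(turn))],
                             parent=node)
                node.children.append(child)
            node = random.choice(node.children)
        # 3) Evaluation: score the (partial) attack trajectory.
        reward = judge(node.dialogue)
        # 4) Backpropagation: propagate the reward back to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Most-visited first-level child = the strongest opening attacker turn.
    best = max(root.children, key=lambda n: n.visits)
    return best.dialogue[0][0]

best_opening = mcts_attack()
```

In the paper's on-policy RL setting, the trajectories and rewards gathered by such a search would additionally serve as training signal for the policy network itself, so that exploration and policy improvement reinforce each other; this sketch shows only the search half.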
Problem

Research questions and friction points this paper is trying to address.

Manual red-teaming and template-based attacks miss complex dialogue dynamics
LLMs are significantly more vulnerable to multi-turn than single-turn attacks
The vast space of multi-turn attack trajectories remains unexplored without human-curated data
Innovation

Methods, ideas, or system contributions that make the work stand out.

DialTree-RPO: on-policy reinforcement learning integrated with tree search
Dialogue-policy-network-guided MCTS over the multi-turn attack space
End-to-end, annotation-free discovery of novel attack strategies