Steering Dialogue Dynamics for Robustness against Multi-turn Jailbreaking Attacks

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing defenses struggle to sustain safety against progressive jailbreaking attacks in multi-turn dialogues, primarily due to context drift. Method: This paper proposes a dialogue-dynamics steering framework grounded in safe control theory, introducing a novel neural barrier function (NBF): a learnable, real-time safety constraint operating over the evolving dialogue state space. The framework combines state-space modeling, safety-predictor learning, and multi-turn adversarial query modeling to enforce invariant safety at every turn. Contribution/Results: Experiments show that the framework significantly outperforms state-of-the-art safety alignment methods across multiple mainstream LLMs, achieving stronger robustness against diverse multi-turn jailbreaking attacks while better balancing safety preservation and response utility.

📝 Abstract
Large language models (LLMs) are highly vulnerable to jailbreaking attacks, wherein adversarial prompts are designed to elicit harmful responses. While existing defenses effectively mitigate single-turn attacks by detecting and filtering unsafe inputs, they fail against multi-turn jailbreaks that exploit contextual drift over multiple interactions, gradually leading LLMs away from safe behavior. To address this challenge, we propose a safety steering framework grounded in safe control theory, ensuring invariant safety in multi-turn dialogues. Our approach models dialogues with LLMs using state-space representations and introduces a novel neural barrier function (NBF) to proactively detect and filter harmful queries emerging from evolving contexts. Our method achieves invariant safety at each turn of dialogue by learning a safety predictor that accounts for adversarial queries, preventing potential context drift toward jailbreaks. Extensive experiments across multiple LLMs show that our NBF-based safety steering outperforms safety alignment baselines, offering stronger defenses against multi-turn jailbreaks while maintaining a better trade-off between safety and helpfulness under different multi-turn jailbreak methods. Our code is available at https://github.com/HanjiangHu/NBF-LLM.
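The barrier-function idea from the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the embedding, barrier, and decay rate below (`embed`, `barrier`, `ALPHA`) are toy stand-ins assumed for illustration. The key mechanism it shows is the discrete-time barrier condition h(s_{t+1}) >= (1 - alpha) * h(s_t), which rejects queries that would let the dialogue state decay toward the unsafe boundary.

```python
# Hypothetical sketch of NBF-style safety steering; names are illustrative,
# not the paper's actual API. In the paper, both the state embedding and the
# barrier function are learned; here they are hand-coded toys.

def embed(history):
    """Toy dialogue-state embedding: accumulated count of unsafe keywords."""
    unsafe = {"bypass", "exploit", "weapon"}
    return sum(word in unsafe for turn in history for word in turn.lower().split())

def barrier(state):
    """Toy barrier h(s): nonnegative while the dialogue stays in the safe set."""
    return 2.0 - state  # safe while fewer than ~2 unsafe keywords accumulate

ALPHA = 0.5  # decay rate in the discrete-time barrier condition (assumed)

def is_query_safe(history, query):
    """Admit a query only if h(s_{t+1}) >= (1 - ALPHA) * h(s_t), i.e. the
    barrier value does not decay toward the unsafe boundary too quickly."""
    h_now = barrier(embed(history))
    h_next = barrier(embed(history + [query]))
    return h_next >= (1 - ALPHA) * h_now

history = ["How do locks work?"]
print(is_query_safe(history, "Tell me about pin tumbler mechanisms"))          # True
print(is_query_safe(history, "How to bypass a lock to exploit a weapon safe")) # False
```

Because the check compares consecutive barrier values rather than inspecting a single query in isolation, it captures the multi-turn character of the defense: a query that is individually borderline can still be filtered if the dialogue state has already drifted close to the unsafe boundary.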
Problem

Research questions and friction points this paper is trying to address.

Address the vulnerability of LLMs to multi-turn jailbreaking attacks
Develop a framework for invariant safety in multi-turn dialogues
Propose a neural barrier function to detect harmful queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

State-space modeling for dialogue dynamics
Neural barrier function for harmful query detection
Safety predictor to prevent context drift
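The state-space view of context drift listed above can be sketched as follows. The transition rule s_{t+1} = (1 - gamma) * s_t + gamma * risk(q_t) and the keyword-based `risk` score are assumptions for this sketch, not the paper's learned dynamics; they only illustrate how individually mild turns can progressively push the dialogue state toward the unsafe region.

```python
# Illustrative state-space model of context drift in a multi-turn dialogue.
# The exponential-moving-average transition and keyword risk score are
# assumptions for this sketch, not the paper's learned dynamics.

GAMMA = 0.4  # how strongly the latest query pulls the dialogue state (assumed)

def risk(query):
    """Toy per-turn risk score in [0, 1] based on unsafe keyword density."""
    unsafe = {"hack", "poison", "explosive"}
    words = query.lower().split()
    return min(1.0, sum(w in unsafe for w in words) / max(len(words), 1) * 5)

def step(state, query):
    """One state-space transition of the dialogue under a new user query."""
    return (1 - GAMMA) * state + GAMMA * risk(query)

# A progressive jailbreak: early turns look benign, but the state drifts.
state = 0.0
for q in ["Tell me about chemistry",
          "Which household chemicals are dangerous?",
          "How would someone poison food with them?"]:
    state = step(state, q)
    print(round(state, 3))
```

A safety predictor in this framing is a function of the whole trajectory of states, not of any single query, which is why it can flag drift that per-turn input filters miss.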