🤖 AI Summary
Existing defenses struggle to ensure sustained safety against progressive jailbreaking attacks in multi-turn dialogues, primarily due to context drift. Method: This paper proposes a safety steering framework for dialogue dynamics grounded in safe control theory, introducing a novel Neural Barrier Function (NBF), a learnable, real-time safety constraint model operating over the evolving dialogue state space. The framework integrates state-space modeling, safety-predictor learning, and multi-turn adversarial query modeling to enforce invariant safety at every turn. Contribution/Results: Experiments demonstrate that the framework significantly outperforms state-of-the-art safety alignment methods across multiple mainstream LLMs, achieving superior robustness against diverse multi-turn jailbreaking attacks while better balancing safety preservation and response utility.
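As a rough illustration of the control-theoretic intuition only (not the paper's exact NBF formulation), a standard discrete-time control barrier function $B$ over dialogue states $s_t$ certifies invariant safety when the safe set is kept forward invariant:

```latex
% Hedged sketch: standard discrete-time CBF condition, shown only to
% illustrate "invariant safety"; the paper's NBF may differ in details.
\mathcal{C} \;=\; \{\, s \;\mid\; B(s) \ge 0 \,\}, \qquad
B(s_{t+1}) \;\ge\; (1 - \alpha)\, B(s_t), \quad \alpha \in (0, 1]
```

Under this condition, if $B(s_0) \ge 0$ then $B(s_t) \ge 0$ at every turn $t$, which is the sense in which safety remains invariant as the dialogue evolves.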
📝 Abstract
Large language models (LLMs) are highly vulnerable to jailbreaking attacks, wherein adversarial prompts are designed to elicit harmful responses. While existing defenses effectively mitigate single-turn attacks by detecting and filtering unsafe inputs, they fail against multi-turn jailbreaks that exploit contextual drift over multiple interactions, gradually leading LLMs away from safe behavior. To address this challenge, we propose a safety steering framework grounded in safe control theory, ensuring invariant safety in multi-turn dialogues. Our approach models dialogue with LLMs using state-space representations and introduces a novel neural barrier function (NBF) to proactively detect and filter harmful queries emerging from evolving contexts. Our method achieves invariant safety at each turn of dialogue by learning a safety predictor that accounts for adversarial queries, preventing potential context drift toward jailbreaks. Extensive experiments on multiple LLMs show that our NBF-based safety steering outperforms safety alignment baselines, offering stronger defenses against diverse multi-turn jailbreak methods while maintaining a better trade-off between safety and helpfulness. Our code is available at https://github.com/HanjiangHu/NBF-LLM.
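For intuition only, the minimal Python sketch below shows how a learned barrier predictor could gate each incoming query in a multi-turn dialogue. All names (`BarrierNet`, `filter_turn`, the embedding dimension, the threshold `alpha`) are hypothetical illustrations and are not taken from the NBF-LLM repository.

```python
# Hypothetical sketch of turn-wise safety steering with a learned barrier
# function; names and thresholds are illustrative, not the authors' API.
import torch
import torch.nn as nn


class BarrierNet(nn.Module):
    """Maps a dialogue-state embedding to a scalar barrier value B(s).

    Convention used here: B(s) >= 0 means the state is predicted safe.
    """

    def __init__(self, dim: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.mlp(state).squeeze(-1)


def filter_turn(barrier: BarrierNet,
                state_emb: torch.Tensor,
                candidate_emb: torch.Tensor,
                alpha: float = 0.5) -> bool:
    """Accept the candidate next turn only if the CBF-style condition
    B(s_{t+1}) >= (1 - alpha) * B(s_t) holds; otherwise treat the query
    as drifting toward a jailbreak and reject it.
    """
    with torch.no_grad():
        b_now = barrier(state_emb)       # barrier value of current dialogue state
        b_next = barrier(candidate_emb)  # barrier value if the query is admitted
    return bool(b_next >= (1.0 - alpha) * b_now)
```

In a full pipeline, `state_emb` and `candidate_emb` would come from an encoder over the dialogue history with and without the incoming query, and rejected queries would be refused or rewritten before reaching the LLM; those details follow the paper and repository rather than this sketch.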