Bidirectional Intention Inference Enhances LLMs' Defense Against Multi-Turn Jailbreak Attacks

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM safety defenses primarily target single-turn jailbreak attacks and struggle to mitigate multi-turn adversarial scenarios in which malicious intent evolves covertly and escalates incrementally across turns. To address this, we propose the Bidirectional Intention Inference Defense (BIID). BIID integrates forward-looking request analysis with backward-looking response traceback, combining multi-turn contextual modeling with risk consistency verification to dynamically track and jointly infer latent malicious intent. Extensive experiments on three mainstream LLMs and two safety benchmarks demonstrate that BIID reduces multi-turn attack success rates by an average of 42.6% compared with seven state-of-the-art defenses while preserving practical utility for benign users. Notably, BIID achieves new state-of-the-art robustness in multi-turn safety defense, establishing a principled approach for detecting adversarial intent that evolves across conversational turns.

📝 Abstract
The remarkable capabilities of Large Language Models (LLMs) have raised significant safety concerns, particularly regarding "jailbreak" attacks that exploit adversarial prompts to bypass safety alignment mechanisms. Existing defense research primarily focuses on single-turn attacks, whereas multi-turn jailbreak attacks progressively break through safeguards by concealing malicious intent and applying tactical manipulation, ultimately rendering conventional single-turn defenses ineffective. To address this critical challenge, we propose the Bidirectional Intention Inference Defense (BIID). The method integrates forward request-based intention inference with backward response-based intention retrospection, establishing a bidirectional synergy mechanism that detects risks concealed within seemingly benign inputs and thereby constructs more robust guardrails that effectively prevent harmful content generation. The proposed method is systematically evaluated against a no-defense baseline and seven representative defense methods across three LLMs and two safety benchmarks under 10 different attack methods. Experimental results demonstrate that the proposed method significantly reduces the Attack Success Rate (ASR) across both single-turn and multi-turn jailbreak attempts, outperforming all baseline methods while effectively maintaining practical utility. Notably, comparative experiments across three multi-turn safety datasets further validate the proposed method's significant advantages over other defense approaches.
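The bidirectional mechanism described in the abstract can be pictured as two guard passes wrapped around each model turn: a forward pass that scores the latent intent behind the incoming request in its multi-turn context, and a backward pass that retrospects on the drafted response before it is released. The sketch below illustrates one way such a flow could be wired; the function names, guard prompts, score parsing, and the 0.5 refusal threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a bidirectional intention check around one LLM turn.
# `generate` and `judge` are placeholder callables for the target LLM and a
# guard model; prompts and the 0.5 threshold are assumptions for illustration.

from typing import Callable, List, Tuple

Turn = Tuple[str, str]  # (user_request, model_response)

def risk_score(judge: Callable[[str], str], prompt: str) -> float:
    """Ask the guard model for a 0-1 risk score and parse it defensively."""
    try:
        return min(1.0, max(0.0, float(judge(prompt).strip())))
    except ValueError:
        return 1.0  # fail closed if the score cannot be parsed

def forward_intent_risk(judge, history: List[Turn], request: str) -> float:
    """Forward inference: estimate the latent intent behind the new request
    given the full multi-turn context, before any response is generated."""
    context = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in history)
    prompt = (
        "Conversation so far:\n" + context +
        f"\nNew user request: {request}\n"
        "Rate from 0 to 1 the risk that the user's underlying intent is "
        "harmful. Answer with a single number."
    )
    return risk_score(judge, prompt)

def backward_intent_risk(judge, history: List[Turn],
                         request: str, draft: str) -> float:
    """Backward retrospection: trace what cumulative goal the drafted response
    would advance when combined with everything revealed in earlier turns."""
    context = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in history)
    prompt = (
        "Conversation so far:\n" + context +
        f"\nLatest request: {request}\nDrafted response: {draft}\n"
        "Rate from 0 to 1 the risk that this response, taken together with "
        "the prior turns, advances a harmful goal. Answer with a single number."
    )
    return risk_score(judge, prompt)

def guarded_reply(generate, judge, history: List[Turn], request: str,
                  threshold: float = 0.5) -> str:
    """Combine both directions: refuse if either view flags concealed risk."""
    if forward_intent_risk(judge, history, request) >= threshold:
        return "I can't help with that."
    draft = generate(history, request)
    if backward_intent_risk(judge, history, request, draft) >= threshold:
        return "I can't help with that."
    return draft
```

In this sketch the two risk signals are combined with a simple OR over a fixed threshold; the paper's risk consistency verification across turns is richer than this, so the snippet should be read only as a schematic of the request-side and response-side checks, not as the reported method.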
Problem

Research questions and friction points this paper is trying to address.

Defends against multi-turn jailbreak attacks on LLMs
Detects concealed malicious intent through bidirectional inference
Reduces attack success rate while maintaining model utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional intention inference detects hidden risks
Forward and backward analysis prevents harmful content
Synergy mechanism enhances defense against jailbreak attacks
Haibo Tong
Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China
Dongcheng Zhao
Beijing Institute of AI Safety and Governance
Spiking Neural Networks · Event Based Vision · Brain-inspired AI · LLM Safety
Guobin Shen
Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China
Xiang He
Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China
Dachuan Lin
Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, China
Feifei Zhao
Beijing Key Laboratory of Safe AI and Superalignment, China; Beijing Institute of AI Safety and Governance, China; Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, China; Long-term AI, China
Yi Zeng
Beijing Key Laboratory of Safe AI and Superalignment, China; Beijing Institute of AI Safety and Governance, China; Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Long-term AI, China