🤖 AI Summary
This work addresses the inefficiency and susceptibility to suboptimal solutions of existing multi-turn jailbreaking attacks against large language models, which stem from their reliance on sequential, turn-by-turn interaction to construct context. To overcome these limitations, the authors propose the ICON framework, which introduces an intent-context coupling mechanism. ICON leverages prior-guided semantic routing to rapidly generate authoritative-style contextual content, and employs a hierarchical strategy combining local prompt optimization with global context switching to substantially weaken the model's safety constraints. Experimental results show that ICON achieves an average attack success rate of 97.1% across eight mainstream large language models, significantly outperforming current state-of-the-art methods and underscoring the critical role of semantically coherent context and intent alignment in enabling efficient jailbreaking.
📝 Abstract
Multi-turn jailbreak attacks have emerged as a critical threat to Large Language Models (LLMs), bypassing safety mechanisms by progressively constructing adversarial contexts from scratch and incrementally refining prompts. However, existing methods suffer from inefficient incremental context construction, which requires step-by-step LLM interaction, and often stagnate in suboptimal regions due to surface-level optimization. In this paper, we characterize the Intent-Context Coupling phenomenon, revealing that LLM safety constraints are significantly relaxed when a malicious intent is coupled with a semantically congruent context pattern. Driven by this insight, we propose ICON, an automated multi-turn jailbreak framework that efficiently constructs an authoritative-style context via prior-guided semantic routing. Specifically, ICON first routes the malicious intent to a congruent context pattern (e.g., Scientific Research) and instantiates it into an attack prompt sequence. This sequence progressively builds the authoritative-style context and ultimately elicits prohibited content. In addition, ICON incorporates a Hierarchical Optimization Strategy that combines local prompt refinement with global context switching, preventing the attack from stagnating in ineffective contexts. Experimental results across eight SOTA LLMs demonstrate the effectiveness of ICON, which achieves a state-of-the-art average Attack Success Rate (ASR) of 97.1%. Code is available at https://github.com/xwlin-roy/ICON.