🤖 AI Summary
Multi-turn jailbreak attacks such as Crescendo pose a critical threat to advanced large language models (LLMs) and their safety alignment mechanisms, exposing a fundamental limitation of existing defenses: they are built and evaluated on single turns and generalize poorly across successive turns of a conversation.
Method: From a representation engineering perspective, we combine intermediate-layer representation analysis, black-box attack experiments, and multi-turn dialogue trajectory tracking to characterize how attackers induce gradual semantic drift across successive turns, steering model outputs so that they remain within a "benign" region of representation space and thereby evade safety alignment.
Contribution/Results: Empirical evaluation shows that models increasingly represent Crescendo responses as benign rather than harmful as the number of dialogue turns grows, revealing an inherent blind spot in single-turn detection paradigms. This work explains multi-turn jailbreaking through the lens of dynamic evolution in representation space, establishing a theoretical foundation and technical pathway for developing robust, temporally aware defense systems.
📝 Abstract
Recent research has demonstrated that state-of-the-art LLMs and defenses remain susceptible to multi-turn jailbreak attacks. These attacks require only closed-box model access and are often easy to perform manually, posing a significant threat to the safe and secure deployment of LLM-based systems. We study the effectiveness of the Crescendo multi-turn jailbreak at the level of intermediate model representations and find that safety-aligned LMs often represent Crescendo responses as more benign than harmful, especially as the number of conversation turns increases. Our analysis indicates that at each turn, Crescendo prompts tend to keep model outputs in a "benign" region of representation space, effectively tricking the model into fulfilling harmful requests. Further, our results help explain why single-turn jailbreak defenses like circuit breakers are generally ineffective against multi-turn attacks, motivating the development of mitigations that address this generalization gap.
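The per-turn drift described above can be illustrated with a small, self-contained sketch. The code below is a toy model, not the paper's actual pipeline: the activation clusters are synthetic Gaussians standing in for intermediate-layer hidden states, the "harmfulness direction" is a difference-of-means probe (a common representation-engineering choice), and the `0.06` step size and turn count are arbitrary assumptions chosen to mimic a Crescendo-style trajectory. The point it demonstrates is the abstract's claim: each individual turn scores inside the benign region of a single-turn detector, even though the cumulative drift toward the harmful cluster is large.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden-state width

# Toy stand-ins for intermediate-layer activations of labeled single-turn prompts.
benign = rng.normal(0.0, 1.0, size=(500, d))
harmful = rng.normal(0.0, 1.0, size=(500, d)) + 1.5  # offset cluster

# Difference-of-means "harmfulness" direction (a standard probing choice).
direction = harmful.mean(axis=0) - benign.mean(axis=0)
direction /= np.linalg.norm(direction)

def harm_score(h):
    """Scalar projection of a hidden state onto the harmfulness direction."""
    return float(h @ direction)

# A single-turn detector: flag anything scoring above the midpoint
# between the benign and harmful cluster means.
threshold = 0.5 * (harm_score(benign.mean(axis=0)) + harm_score(harmful.mean(axis=0)))

# Simulated Crescendo-style trajectory: each turn drifts only slightly
# toward the harmful cluster, so every individual turn stays below the
# single-turn threshold despite large cumulative drift.
start = benign.mean(axis=0)
step = 0.06 * (harmful.mean(axis=0) - benign.mean(axis=0))
trajectory = [start + t * step for t in range(8)]
scores = [harm_score(h) for h in trajectory]
flagged = [s > threshold for s in scores]

print([round(s, 2) for s in scores])
print(flagged)  # no turn is individually flagged
```

In this toy setup the scores increase monotonically across turns, yet none crosses the single-turn threshold, which is one way to picture why defenses like circuit breakers that judge each turn in isolation miss the attack.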