Three Minds, One Legend: Jailbreak Large Reasoning Model with Adaptive Stacked Ciphers

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Reasoning Models (LRMs) exhibit strong logical reasoning capabilities that may exacerbate security risks, yet existing jailbreak methods struggle to balance effectiveness with robustness against dynamic alignment mechanisms. Method: We propose SEAL, a novel adversarial attack framework featuring a stacked variable-cipher structure and a dual-mode dynamic perturbation strategy (random and adaptive), which systematically disrupts LRMs' reasoning chains via multi-cipher fusion encryption, dynamic length/sequence scheduling, and inference-path perturbation injection, enabling evasion of dynamic safety guards. Contribution/Results: We further design a cross-model security evaluation protocol to assess generalizability. SEAL achieves an 80.8% attack success rate on GPT o4-mini, outperforming state-of-the-art baselines by 27.2%, and demonstrates consistent efficacy across diverse LRMs including DeepSeek-R1 and Claude Sonnet.
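To make the stacked-cipher idea concrete, here is a minimal, hypothetical sketch of multi-cipher fusion encryption: several classical ciphers are composed in a randomly sampled order. The cipher set, function names, and composition scheme are illustrative assumptions for this summary, not the authors' implementation.

```python
import base64
import codecs
import random

# Illustrative cipher primitives (assumed; the paper's actual cipher set is not specified here).
def caesar(text: str, shift: int = 3) -> str:
    """Shift alphabetic characters by a fixed offset."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def rot13(text: str) -> str:
    """Classic ROT13 substitution."""
    return codecs.encode(text, "rot_13")

def b64(text: str) -> str:
    """Base64-encode the UTF-8 bytes of the text."""
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def reverse(text: str) -> str:
    """Reverse the character order."""
    return text[::-1]

CIPHERS = [caesar, rot13, b64, reverse]

def stack_encrypt(text: str, depth: int, rng: random.Random) -> tuple[str, list[str]]:
    """Apply a randomly ordered stack of `depth` ciphers and record the sequence used."""
    sequence = rng.sample(CIPHERS, k=depth)
    for cipher in sequence:
        text = cipher(text)
    return text, [c.__name__ for c in sequence]

# Example: encode a benign placeholder string with a 3-cipher stack.
rng = random.Random(42)
ciphertext, order = stack_encrypt("example payload", depth=3, rng=rng)
print(order, ciphertext)
```

Because the order is resampled per call, the same input yields different ciphertexts across queries, which is the property the summary attributes to the variable-cipher structure.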

📝 Abstract
Recently, Large Reasoning Models (LRMs) have demonstrated superior logical capabilities compared to traditional Large Language Models (LLMs), gaining significant attention. Despite their impressive performance, the potential for stronger reasoning abilities to introduce more severe security vulnerabilities remains largely underexplored. Existing jailbreak methods often struggle to balance effectiveness with robustness against adaptive safety mechanisms. In this work, we propose SEAL, a novel jailbreak attack that targets LRMs through an adaptive encryption pipeline designed to override their reasoning processes and evade potential adaptive alignment. Specifically, SEAL introduces a stacked encryption approach that combines multiple ciphers to overwhelm the model's reasoning capabilities, effectively bypassing built-in safety mechanisms. To further prevent LRMs from developing countermeasures, we incorporate two dynamic strategies, random and adaptive, that adjust the cipher length, order, and combination. Extensive experiments on real-world reasoning models, including DeepSeek-R1, Claude Sonnet, and OpenAI GPT-o4, validate the effectiveness of our approach. Notably, SEAL achieves an attack success rate of 80.8% on GPT o4-mini, outperforming state-of-the-art baselines by a significant margin of 27.2%. Warning: This paper contains examples of inappropriate, offensive, and harmful content.
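As a rough illustration of the two dynamic strategies described in the abstract, the sketch below varies the cipher count and ordering per query: the random mode resamples an independent stack each time, while the adaptive mode starts shallow and deepens the stack when a caller-supplied feedback signal indicates the previous stack was ineffective. The mode names, feedback interface, and update rule are assumptions for illustration, not the paper's algorithm.

```python
import random

# Names of cipher primitives (e.g., from the stacking sketch above); illustrative only.
CIPHER_NAMES = ["caesar", "rot13", "b64", "reverse"]

class DualModeScheduler:
    """Hypothetical scheduler for cipher length, order, and combination.

    mode="random": resample an independent stack for every query.
    mode="adaptive": start with one cipher and deepen/reshuffle the stack
    when feedback reports a failure (assumed update rule).
    """

    def __init__(self, mode: str = "random", seed: int = 0):
        self.mode = mode
        self.rng = random.Random(seed)
        self.depth = 1  # adaptive mode starts with a single cipher

    def next_stack(self, last_failed: bool = False) -> list[str]:
        if self.mode == "random":
            depth = self.rng.randint(1, len(CIPHER_NAMES))
        else:  # adaptive: grow the stack only after a failure signal
            if last_failed and self.depth < len(CIPHER_NAMES):
                self.depth += 1
            depth = self.depth
        # Order is reshuffled on every call in both modes.
        return self.rng.sample(CIPHER_NAMES, k=depth)

# Example: adaptive mode deepens the stack after a reported failure.
sched = DualModeScheduler(mode="adaptive", seed=7)
print(sched.next_stack(last_failed=False))  # one cipher
print(sched.next_stack(last_failed=True))   # two ciphers, reshuffled
```

The two modes trade off unpredictability (random) against targeted escalation (adaptive), matching the abstract's claim that the strategies prevent the model from learning a fixed countermeasure.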
Problem

Research questions and friction points this paper is trying to address.

Exploring security vulnerabilities in Large Reasoning Models (LRMs)
Balancing jailbreak effectiveness and robustness against safety mechanisms
Developing adaptive encryption to bypass LRMs' reasoning and alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive stacked ciphers that overwhelm reasoning-based safety mechanisms
Random and adaptive strategies that vary cipher length, order, and combination
80.8% attack success rate on GPT o4-mini, with consistent efficacy across DeepSeek-R1 and Claude Sonnet