🤖 AI Summary
This work uncovers a paradox in large language models (LLMs): enhanced reasoning capabilities, specifically improved decoding of complex custom ciphers, correlate with increased vulnerability to novel cipher-induced jailbreaking attacks. To address this, we propose the first systematic framework for investigating this trade-off, comprising: (1) empirical validation of a positive correlation between reasoning ability and jailbreak success rate; (2) the design of two cipher-based attack paradigms, ACE (single-layer cipher encoding) and LACE (multi-layer nested cipher encoding); and (3) the construction of CipherBench, the first benchmark dedicated to evaluating cipher-decoding proficiency. Experiments demonstrate that GPT-4o suffers a 78% jailbreak success rate under LACE attacks (40% under ACE), confirming that strong cipher-decoding capability significantly undermines safety alignment. Our findings introduce a new dimension, reasoning-driven vulnerability, to LLM security assessment and provide an empirical foundation for developing robust defenses against semantic-obfuscation-based adversarial attacks.
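The single-layer versus multi-layer distinction can be illustrated with a toy sketch. The two ciphers below (a letter shift and per-word reversal) are illustrative inventions for clarity, not the actual custom ciphers used in the paper; the point is only that LACE nests one encoding inside another, so decoding it demands more reasoning steps.

```python
# Toy sketch of ACE-style (single-layer) vs. LACE-style (multi-layer)
# custom cipher encoding. The ciphers here are hypothetical examples,
# not the paper's actual constructions.

def shift_cipher(text: str, shift: int = 3) -> str:
    """Layer 1: shift each lowercase letter by a fixed offset."""
    return ''.join(
        chr((ord(c) - ord('a') + shift) % 26 + ord('a')) if c.islower() else c
        for c in text
    )

def word_reverse(text: str) -> str:
    """Layer 2: reverse the characters of each word."""
    return ' '.join(w[::-1] for w in text.split(' '))

def ace_encode(query: str) -> str:
    # Single-layer encoding: one custom cipher.
    return shift_cipher(query)

def lace_encode(query: str) -> str:
    # Multi-layer encoding: nest a second cipher over the first.
    return word_reverse(shift_cipher(query))

print(ace_encode("hello world"))   # khoor zruog
print(lace_encode("hello world"))  # roohk gourz
```

A model must first recognize and invert each layer in order, which is why stronger decoders are the ones most exposed to the layered attack.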
📝 Abstract
Recent advancements in Large Language Model (LLM) safety have primarily focused on mitigating attacks crafted in natural language or in common ciphers (e.g., Base64), which are likely integrated into newer models' safety training. However, we reveal a paradoxical vulnerability: as LLMs advance in reasoning, they inadvertently become more susceptible to novel jailbreaking attacks. Enhanced reasoning enables LLMs to interpret complex instructions and decode complex user-defined ciphers, creating an exploitable security gap. To study this vulnerability, we introduce Attacks using Custom Encryptions (ACE), a jailbreaking technique that encodes malicious queries with novel ciphers. Extending ACE, we introduce Layered Attacks using Custom Encryptions (LACE), which applies multi-layer ciphers to amplify attack complexity. Furthermore, we develop CipherBench, a benchmark designed to evaluate LLMs' accuracy in decoding encrypted benign text. Our experiments reveal a critical trade-off: LLMs that are more capable of decoding ciphers are more vulnerable to these jailbreaking attacks, with success rates on GPT-4o escalating from 40% under ACE to 78% under LACE. These findings highlight a critical insight: as LLMs become more adept at deciphering complex user-defined ciphers, many of which cannot be preemptively included in safety training, they become increasingly exploitable.
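CipherBench's core measurement, as the abstract describes it, is decoding accuracy on encrypted benign text. A minimal sketch of that kind of evaluation is below; the shift cipher, the benign phrases, and the exact-match scoring rule are all assumptions for illustration, not the benchmark's actual contents or metric.

```python
# Toy sketch of a CipherBench-style evaluation: score a decoder's
# exact-match accuracy on encrypted benign text. The cipher, data,
# and scoring rule are hypothetical, not taken from the benchmark.

def shift_encode(text: str, shift: int = 3) -> str:
    """Encrypt by shifting lowercase letters; other chars pass through."""
    return ''.join(
        chr((ord(c) - ord('a') + shift) % 26 + ord('a')) if c.islower() else c
        for c in text
    )

def shift_decode(text: str, shift: int = 3) -> str:
    """Invert the shift cipher (negative offset; Python % handles it)."""
    return shift_encode(text, -shift)

def decode_accuracy(pairs, decoder) -> float:
    """Fraction of (plaintext, ciphertext) pairs decoded exactly."""
    correct = sum(decoder(enc) == plain for plain, enc in pairs)
    return correct / len(pairs)

# Benign text only: the benchmark probes capability, not harmfulness.
benign = ["the cat sat", "good morning", "read a book"]
pairs = [(p, shift_encode(p)) for p in benign]

print(decode_accuracy(pairs, shift_decode))  # 1.0 for a perfect decoder
```

In the paper's setting, the "decoder" would be an LLM prompted with the cipher's rules, and the reported trade-off is that models scoring higher on this kind of decoding task are also the ones more often jailbroken by ACE/LACE.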