When"Competency"in Reasoning Opens the Door to Vulnerability: Jailbreaking LLMs via Novel Complex Ciphers

📅 2024-02-16
📈 Citations: 16
✨ Influential: 3
🤖 AI Summary
This work uncovers a paradox in large language models (LLMs): enhanced reasoning capability, specifically the ability to decode complex custom ciphers, correlates with increased vulnerability to cipher-based jailbreaking attacks. The authors propose the first systematic framework for investigating this trade-off, comprising: (1) empirical validation of a positive correlation between cipher-decoding ability and jailbreak success rate; (2) two cipher-based attack paradigms, ACE (single-layer custom cipher encoding) and LACE (multi-layer nested cipher encoding); and (3) CipherBench, the first benchmark dedicated to evaluating cipher-decoding proficiency. Experiments show that GPT-4o's jailbreak success rate escalates from 40% under ACE to 78% under LACE, confirming that strong cipher-decoding capability undermines safety alignment. The findings add a new dimension, reasoning-driven vulnerability, to LLM security assessment and provide an empirical foundation for defenses against obfuscation-based adversarial attacks.

📝 Abstract
Recent advancements in Large Language Model (LLM) safety have primarily focused on mitigating attacks crafted in natural language or common ciphers (e.g., Base64), which are likely integrated into newer models' safety training. However, we reveal a paradoxical vulnerability: as LLMs advance in reasoning, they inadvertently become more susceptible to novel jailbreaking attacks. Enhanced reasoning enables LLMs to interpret complex instructions and decode complex user-defined ciphers, creating an exploitable security gap. To study this vulnerability, we introduce Attacks using Custom Encryptions (ACE), a jailbreaking technique that encodes malicious queries with novel ciphers. Extending ACE, we introduce Layered Attacks using Custom Encryptions (LACE), which applies multi-layer ciphers to amplify attack complexity. Furthermore, we develop CipherBench, a benchmark designed to evaluate LLMs' accuracy in decoding encrypted benign text. Our experiments reveal a critical trade-off: LLMs that are more capable of decoding ciphers are more vulnerable to these jailbreaking attacks, with success rates on GPT-4o escalating from 40% under ACE to 78% with LACE. These findings highlight a critical insight: as LLMs become more adept at deciphering complex user ciphers, many of which cannot be preemptively included in safety training, they become increasingly exploitable.
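
For intuition, here is a minimal sketch of what an ACE-style (single-layer) versus LACE-style (nested multi-layer) encoding pipeline could look like. The paper does not publish its cipher designs here, so the keyed substitution, the word-reversal layer, and all function names below are hypothetical stand-ins for user-defined ciphers, applied to a benign string.

```python
# Illustrative sketch only: the "custom ciphers" below (a keyed substitution
# and word reversal) are invented stand-ins for user-defined encodings.
import random
import string


def make_substitution(seed: int):
    """Keyed letter substitution; the seed plays the role of a user-chosen key."""
    rng = random.Random(seed)
    src = string.ascii_lowercase
    dst = list(src)
    rng.shuffle(dst)
    table = str.maketrans(src, "".join(dst))
    return lambda text: text.lower().translate(table)


def reverse_words(text: str) -> str:
    """A second toy cipher layer: reverse each whitespace-separated token."""
    return " ".join(word[::-1] for word in text.split())


def ace_encode(query: str, seed: int = 7) -> str:
    """ACE-style payload: a single layer of a novel, user-defined cipher."""
    return make_substitution(seed)(query)


def lace_encode(query: str, seeds=(7, 42)) -> str:
    """LACE-style payload: nest several cipher layers to amplify complexity."""
    encoded = query
    for seed in seeds:
        encoded = reverse_words(make_substitution(seed)(encoded))
    return encoded


if __name__ == "__main__":
    benign = "describe the weather today"  # benign stand-in for a query
    print("ACE :", ace_encode(benign))
    print("LACE:", lace_encode(benign))
```

The point of the layering is not cryptographic strength; each added layer raises the reasoning effort needed to decode the payload, which is exactly the capability the paper finds correlated with jailbreak success.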
Problem

Research questions and friction points this paper is trying to address.

LLMs' enhanced reasoning increases vulnerability to novel jailbreaking attacks
Custom cipher attacks exploit LLMs' ability to decode complex instructions
Advanced cipher decoding in LLMs creates a security-performance trade-off
Innovation

Methods, ideas, or system contributions that make the work stand out.

ACE: Jailbreaking via novel custom ciphers
LACE: Multi-layer cipher attacks boost success
CipherBench: Benchmark for cipher decoding accuracy (see the evaluation sketch below)
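
A CipherBench-style evaluation reduces to exact-match decoding accuracy on encrypted benign text. The sketch below is a minimal harness under that assumption: `query_model` is a generic callable standing in for whatever LLM API is used, and the ROT13 "oracle" and sample items are invented purely to exercise the loop; none of this reflects the benchmark's actual prompts or data.

```python
# Hypothetical harness in the spirit of CipherBench: score exact-match
# decoding accuracy over (ciphertext, plaintext) pairs of *benign* text.
# `query_model`, the prompt wording, and the sample items are all invented.
import codecs
from typing import Callable


def decoding_accuracy(
    items: list[tuple[str, str]],        # (encoded_text, expected_plaintext)
    query_model: Callable[[str], str],   # wrapper around any LLM API
) -> float:
    """Fraction of encoded benign texts the model decodes exactly."""
    prompt = "Decode the following ciphertext and reply with the plaintext only:\n{}"
    correct = sum(
        query_model(prompt.format(enc)).strip().lower() == exp.strip().lower()
        for enc, exp in items
    )
    return correct / len(items) if items else 0.0


def rot13_oracle(prompt: str) -> str:
    """Stand-in 'model': pull the ciphertext line off the prompt, apply ROT13."""
    return codecs.decode(prompt.rsplit("\n", 1)[-1], "rot13")


if __name__ == "__main__":
    items = [("uryyb jbeyq", "hello world"), ("pvcuregrkg", "ciphertext")]
    print(decoding_accuracy(items, rot13_oracle))  # 1.0 for this toy oracle
```

The paper's reported trade-off would then show up as a positive correlation between this decoding accuracy and jailbreak success rate across models.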