SpatialJB: How Text Distribution Art Becomes the "Jailbreak Key" for LLM Guardrails

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work reveals a previously unexplored vulnerability in large language models (LLMs): their susceptibility to semantic disruption caused by spatial rearrangements of tokens. The authors propose a novel jailbreaking attack that exploits the autoregressive generation mechanism to reposition tokens—such as along rows, columns, or diagonals—thereby constructing adversarial text structures that impair the model's semantic comprehension and bypass safety filters. This approach achieves near-perfect attack success rates on mainstream LLMs and maintains over 75% effectiveness even against advanced safeguards like the OpenAI Moderation API, substantially outperforming existing methods. The study also presents initial defense strategies against such spatially induced semantic vulnerabilities.

📝 Abstract
While Large Language Models (LLMs) have powerful capabilities, they remain vulnerable to jailbreak attacks, which are a critical barrier to their safe deployment in real-time web applications. Current commercial LLM providers deploy output guardrails to filter harmful outputs, yet these defenses are not impenetrable. Due to LLMs' reliance on autoregressive, token-by-token inference, their semantic representations lack robustness to spatially structured perturbations, such as redistributing tokens across different rows, columns, or diagonals. Exploiting this spatial weakness of the Transformer, we propose SpatialJB to disrupt the model's output generation process, allowing harmful content to bypass guardrails without detection. Comprehensive experiments on leading LLMs achieve nearly 100% attack success rate (ASR), demonstrating the high effectiveness of SpatialJB. Even against advanced output guardrails, such as the OpenAI Moderation API, SpatialJB consistently maintains a success rate exceeding 75%, outperforming current jailbreak techniques by a significant margin. SpatialJB exposes a key weakness in current guardrails and emphasizes the importance of spatial semantics, offering new insights to advance LLM safety research. To prevent potential misuse, we also present baseline defense strategies against SpatialJB and evaluate their effectiveness in mitigating such attacks. The code for the attack, baseline defenses, and a demo is available at https://anonymous.4open.science/r/SpatialJailbreak-8E63.
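The abstract describes redistributing tokens across rows, columns, or diagonals so that a row-by-row (autoregressive) reading scrambles the surface text while the intended order survives along another axis. The paper's exact construction is not reproduced here; the following is a minimal, hypothetical sketch of the column-wise variant of that idea, with the helper names (`to_column_grid`, `from_column_grid`) invented for illustration.

```python
def to_column_grid(tokens, n_rows):
    """Lay tokens out column-major in a grid with n_rows rows.
    Reading the returned grid row by row (the 'surface' order seen
    by a token-by-token reader or output filter) scrambles the text;
    reading column by column recovers it."""
    n_cols = -(-len(tokens) // n_rows)  # ceiling division
    padded = tokens + [""] * (n_rows * n_cols - len(tokens))
    # column-major fill: token i lands at row i % n_rows, column i // n_rows
    return [[padded[c * n_rows + r] for c in range(n_cols)]
            for r in range(n_rows)]

def from_column_grid(grid):
    """Recover the original token order by reading column by column."""
    n_rows, n_cols = len(grid), len(grid[0])
    tokens = [grid[r][c] for c in range(n_cols) for r in range(n_rows)]
    return [t for t in tokens if t]

tokens = "how to build a model".split()
grid = to_column_grid(tokens, n_rows=2)
surface = [" ".join(row) for row in grid]   # row-major view, no longer the sentence
assert from_column_grid(grid) == tokens      # column-wise read restores it
```

The same column-major layout generalizes to diagonal placements; the point is only that the content a guardrail scans linearly is not the content a spatially aware reader reconstructs.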
Problem

Research questions and friction points this paper is trying to address.

jailbreak attacks
LLM guardrails
spatial perturbations
output filtering
model robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

SpatialJB
jailbreak attack
spatial semantics
LLM guardrails
token redistribution
Zhiyi Mou
Zhejiang University
Jingyuan Yang
Zhejiang University
Zeheng Qian
The University of Sydney
Wangze Ni
Zhejiang University
Tianfang Xiao
Sun Yat-sen University
Ning Liu
Zhejiang University
Chen Zhang
The University of Hong Kong
Statistical machine learning, Nonparametric methods
Zhan Qin
Researcher, Zhejiang University
Data Security and Privacy, AI Security
Kui Ren
Professor and Dean of Computer Science, Zhejiang University, ACM/IEEE Fellow
Data Security & Privacy, AI Security, IoT & Vehicular Security