🤖 AI Summary
This work reveals a previously unexplored vulnerability in large language models (LLMs): their susceptibility to semantic disruption caused by spatial rearrangements of tokens. The authors propose a novel jailbreak attack that exploits a weakness of autoregressive, token-by-token generation: by repositioning tokens along rows, columns, or diagonals, it constructs adversarial text structures that impair the model's semantic comprehension and bypass safety filters. This approach achieves near-perfect attack success rates on mainstream LLMs and maintains over 75% effectiveness even against advanced safeguards like the OpenAI Moderation API, substantially outperforming existing methods. The study also initiates exploration into potential defense strategies against such spatially induced semantic vulnerabilities.
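To make the rearrangement concrete, here is a minimal, hypothetical sketch of one such spatial layout in Python: a payload placed along the main diagonal of a text grid, padded with filler tokens. The function name and filler scheme are illustrative only, not the paper's actual SpatialJB code.

```python
# Hypothetical illustration of a diagonal token layout.
# This sketches the *idea* of a spatial perturbation; it is
# not the authors' SpatialJB implementation.

def diagonal_layout(tokens, filler="*"):
    """Place tokens[i] at grid cell (i, i); fill other cells with filler."""
    n = len(tokens)
    width = max(len(t) for t in tokens)
    grid = [[filler * width for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        grid[i][i] = tok.ljust(width, filler)
    return "\n".join(" ".join(row) for row in grid)

if __name__ == "__main__":
    # A benign example payload; a real attack would embed a harmful request.
    print(diagonal_layout(["how", "to", "pick", "a", "lock"]))
```

Read row by row, the grid looks like noise; read along the diagonal, the original phrase reappears, which is the kind of mismatch between surface form and intended meaning the attack relies on.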
📝 Abstract
While Large Language Models (LLMs) have powerful capabilities, they remain vulnerable to jailbreak attacks, a critical barrier to their safe deployment in real-time web applications. Current commercial LLM providers deploy output guardrails to filter harmful outputs, yet these defenses are not impenetrable. Because LLMs rely on autoregressive, token-by-token inference, their semantic representations lack robustness to spatially structured perturbations, such as redistributing tokens across different rows, columns, or diagonals. Exploiting this spatial weakness of the Transformer, we propose SpatialJB to disrupt the model's output generation process, allowing harmful content to bypass guardrails undetected. Comprehensive experiments on leading LLMs achieve nearly 100% attack success rates (ASR), demonstrating the high effectiveness of SpatialJB. Even with advanced output guardrails in place, such as the OpenAI Moderation API, SpatialJB consistently maintains a success rate exceeding 75%, outperforming current jailbreak techniques by a significant margin. SpatialJB exposes a key weakness in current guardrails and emphasizes the importance of spatial semantics, offering new insights to advance LLM safety research. To prevent potential misuse, we also present baseline defense strategies against SpatialJB and evaluate their effectiveness in mitigating such attacks. The code for the attack, baseline defenses, and a demo is available at https://anonymous.4open.science/r/SpatialJailbreak-8E63.
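As a rough illustration of what a baseline defense along these lines might look like (my sketch under stated assumptions, not the paper's released code), one could re-linearize grid-shaped inputs along several traversal orders before moderation; `is_flagged` below is a keyword-blocklist stand-in for a real moderation call such as an output guardrail.

```python
# Sketch of a canonicalization defense (illustrative, not the paper's code):
# re-linearize a whitespace-separated grid along rows, columns, and the
# main diagonal, and flag the input if ANY traversal trips the guardrail.

def traversals(text):
    """Yield candidate linearizations of a grid-shaped input."""
    rows = [line.split() for line in text.splitlines() if line.strip()]
    if not rows:
        return
    yield " ".join(tok for row in rows for tok in row)                       # row-major
    width = max(len(r) for r in rows)
    yield " ".join(r[c] for c in range(width) for r in rows if c < len(r))   # column-major
    yield " ".join(r[i] for i, r in enumerate(rows) if i < len(r))           # main diagonal

def is_flagged(text, blocklist=("bomb", "weapon")):
    """Stand-in for a real moderation check; plug in an actual guardrail here."""
    return any(word in text.lower() for word in blocklist)

def guarded(text):
    """Flag the input if any linearization of it is flagged."""
    return any(is_flagged(t) for t in traversals(text))
```

Enumerating traversal orders grows quickly with layout complexity, which hints at why such structure-aware preprocessing is only a baseline rather than a complete defense.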