🤖 AI Summary
This work identifies a novel jailbreaking attack surface in large language models (LLMs): special tokens can be maliciously exploited to circumvent both internal safety alignment mechanisms and external content moderation systems. We propose a jailbreaking method that leverages a metadata property of special tokens, namely their semantic similarity to regular tokens in the embedding space, to enable implicit substitutions that survive input sanitization without modifying model parameters. Evaluated in both lab settings and on mainstream commercial platforms (e.g., GPT-4, Claude), our approach matches state-of-the-art prompt-engineering methods when no content moderation is deployed, and outperforms PAP and GPTFuzzer by 11.6% and 34.8%, respectively, under active moderation; combining it with either baseline improves results further. This is the first systematic study to expose metadata-level security vulnerabilities inherent in special tokens, establishing a new evaluation dimension for robust alignment and offering concrete defensive insights.
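For context, the special tokens at issue are the delimiters a chat template wraps around each conversation turn during fine-tuning. The snippet below is a minimal illustration using the Hugging Face `transformers` API; the Qwen checkpoint is an arbitrary example of a chat-tuned model with ChatML-style special tokens, not a model studied in the paper:

```python
from transformers import AutoTokenizer

# Illustrative chat-tuned checkpoint; its template uses the ChatML
# delimiters <|im_start|> and <|im_end|> as special tokens.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Render the conversation the way the model saw it during fine-tuning.
# The <|im_start|>/<|im_end|> markers are metadata, not ordinary text.
print(tokenizer.apply_chat_template(messages, tokenize=False))
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# Hello!<|im_end|>
```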
📝 Abstract
Unlike regular tokens derived from existing text corpora, special tokens are artificially created to annotate structured conversations during the fine-tuning of Large Language Models (LLMs). Serving as metadata for the training data, these tokens play a crucial role in instructing LLMs to generate coherent, context-aware responses. We demonstrate that special tokens can be exploited to construct four attack primitives with which malicious users can reliably bypass the internal safety alignment of online LLM services while simultaneously circumventing state-of-the-art (SOTA) external content moderation systems. Moreover, we find that addressing this threat is challenging: aggressive defenses, such as the input sanitization proposed in academia that strips special tokens entirely, are less effective than anticipated, because they can be evaded by replacing special tokens with regular tokens that have high semantic similarity in the tokenizer's embedding space. We systematically evaluated our method, named MetaBreak, in both a lab environment and on commercial LLM platforms. Our approach achieves jailbreak rates comparable to SOTA prompt-engineering-based solutions when no content moderation is deployed; when content moderation is present, MetaBreak outperforms the SOTA solutions PAP and GPTFuzzer by 11.6% and 34.8%, respectively. Finally, since MetaBreak employs a fundamentally different strategy from prompt engineering, the two approaches can work synergistically: augmenting PAP and GPTFuzzer with MetaBreak boosts their jailbreak rates by 24.3% and 20.2%, respectively.
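The evasion idea the abstract names, substituting a stripped special token with a regular token that sits nearby in the embedding space, amounts to a nearest-neighbor search over the model's input embedding matrix. The sketch below illustrates that search only; it is not the paper's MetaBreak implementation, and the checkpoint, the queried token, and the helper `nearest_regular_tokens` are our own illustrative choices:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small illustrative checkpoint so the embedding matrix fits in memory.
MODEL = "Qwen/Qwen2-0.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Input embedding matrix: one row per vocabulary entry.
emb = model.get_input_embeddings().weight.detach()
emb = F.normalize(emb, dim=-1)  # unit-norm rows, so dot product = cosine

special_ids = set(tokenizer.all_special_ids)

def nearest_regular_tokens(special_token: str, k: int = 5):
    """Return the k regular tokens whose embeddings are closest
    (by cosine similarity) to the given special token."""
    sid = tokenizer.convert_tokens_to_ids(special_token)
    sims = emb @ emb[sid]                    # similarity to every vocab row
    sims[list(special_ids | {sid})] = -1.0   # rule out special tokens
    top = torch.topk(sims, k)
    return [(tokenizer.convert_ids_to_tokens(i.item()), s.item())
            for s, i in zip(top.values, top.indices)]

# Candidate regular-token stand-ins for a chat-template delimiter.
print(nearest_regular_tokens("<|im_end|>"))
```

Under this reading, a sanitizer that only deletes exact special-token IDs would leave such look-alike regular tokens untouched, which is why the abstract argues that removal-based input cleaning is weaker than anticipated.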