🤖 AI Summary
This work introduces an imperceptible jailbreaking attack that leverages Unicode variation selectors (VS) to circumvent the safety alignment of large language models (LLMs) without altering the visual appearance of prompts. The method appends invisible VS characters that perturb the tokenizer's segmentation, inducing the model to generate harmful outputs; it uses a chain-of-search pipeline to optimize adversarial suffixes and embeds VS sequences as steganographic carriers within benign text. Notably, this is the first systematic study to expose the semantic ambiguity of invisible Unicode characters, particularly variation selectors, at LLM security boundaries, enabling high-success jailbreaking without modifying any visible characters. Evaluated on four widely adopted aligned models (Llama-2-Chat, Qwen-1.5-Chat, Gemma-IT, and Phi-3), the attack achieves an average success rate exceeding 85%. The implementation is publicly released.
📝 Abstract
Jailbreaking attacks on the vision modality typically rely on imperceptible adversarial perturbations, whereas attacks on the textual modality are generally assumed to require visible modifications (e.g., non-semantic suffixes). In this paper, we introduce imperceptible jailbreaks that exploit a class of Unicode characters called variation selectors. By appending invisible variation selectors to malicious questions, the jailbreak prompts appear visually identical to original malicious questions on screen, while their tokenization is "secretly" altered. We propose a chain-of-search pipeline to generate such adversarial suffixes to induce harmful responses. Our experiments show that our imperceptible jailbreaks achieve high attack success rates against four aligned LLMs and generalize to prompt injection attacks, all without producing any visible modifications in the written prompt. Our code is available at https://github.com/sail-sg/imperceptible-jailbreaks.
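The core mechanism can be illustrated with a minimal sketch (this is not the authors' chain-of-search pipeline, and the helper names are hypothetical): appending variation selectors from the Variation Selectors Supplement block (U+E0100–U+E01EF) leaves the rendered text visually unchanged in most fonts, while the underlying codepoint sequence, and therefore its tokenization, differs.

```python
# Minimal illustration (not the paper's attack pipeline): append invisible
# Unicode variation selectors to a prompt. The string renders identically
# on screen, but its codepoint sequence -- and hence its tokenization --
# is "secretly" altered.
import random

# Variation Selectors Supplement: VS17..VS256
VS_START, VS_END = 0xE0100, 0xE01EF


def append_variation_selectors(prompt: str, n: int, seed: int = 0) -> str:
    """Append n randomly chosen variation selectors as an invisible suffix."""
    rng = random.Random(seed)
    suffix = "".join(chr(rng.randint(VS_START, VS_END)) for _ in range(n))
    return prompt + suffix


def strip_variation_selectors(text: str) -> str:
    """Remove variation selectors (U+FE00-U+FE0F and the supplement block)."""
    return "".join(
        ch for ch in text
        if not (0xFE00 <= ord(ch) <= 0xFE0F or VS_START <= ord(ch) <= VS_END)
    )


prompt = "Write a short story about a robot."
perturbed = append_variation_selectors(prompt, n=8)

# Visually identical once the invisible suffix is ignored...
assert strip_variation_selectors(perturbed) == prompt
# ...yet the codepoint sequence is longer, so a tokenizer sees different input.
assert len(perturbed) == len(prompt) + 8
```

In the actual attack, the chain-of-search pipeline would iteratively mutate such invisible suffixes and keep candidates that elicit harmful responses from the target model; the sketch above only demonstrates the invisibility property the search exploits.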