🤖 AI Summary
Conventional wisdom treats large language models' (LLMs') ability to process non-human-readable jailbreak prompts as an alignment failure. This work challenges that view, arguing instead that such "unnatural language" inputs constitute latent representations encoding transferable semantic features.
Method: We propose the first systematic instruction-driven framework for generating unnatural language, integrating multi-model generalization analysis, noise-robust representation modeling, and context-aware semantic inference to assess cross-model and cross-task consistency.
Contribution/Results: Experiments on Length-controlled AlpacaEval 2.0 show that models fine-tuned exclusively on unnatural language achieve an average win rate of 49.71%, matching the performance of natural-language fine-tuning. This demonstrates that unnatural language serves as an effective, generalizable, and efficient instruction representation paradigm, validating its semantic fidelity, reusability, and robustness across diverse LLMs and downstream tasks.
📝 Abstract
Large Language Models (LLMs) have been observed to process non-human-readable text sequences, such as jailbreak prompts, a behavior often viewed as a bug in aligned LLMs. In this work, we present a systematic investigation challenging this perception, demonstrating that unnatural languages, strings that appear incomprehensible to humans but retain semantic meaning for LLMs, contain latent features usable by models. Notably, these latent features generalize across different models and tasks during inference. Furthermore, models fine-tuned on unnatural versions of instruction datasets perform on par with those trained on natural language, achieving an average win rate of 49.71 on Length-controlled AlpacaEval 2.0 across various base models. In addition, through comprehensive analysis, we demonstrate that LLMs process unnatural languages by filtering out noise and inferring contextual meaning from the remaining words.