Evil twins are not that evil: Qualitative insights into machine-generated prompts

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates why language models (LMs) respond predictably to algorithmically generated "autoprompt" inputs, whose opacity can be exploited for jailbreaking and other harmful uses. Method: token-level ablation experiments, cross-model comparisons across six architectures and scales, expert annotation, and consistency evaluation. Contribution/Results: The authors identify stable, semantically critical tokens within autoprompts, showing these prompts are not purely black-box artifacts. They reveal a last-token dominance effect, show that 15–30% of tokens can be pruned without degrading functionality, and establish a functional separation between filler tokens (replaceable by unrelated substitutes) and keywords (which retain at least a loose semantic association with the generation). Crucially, they verify empirically that autoprompts and natural language share common processing mechanisms in LMs. These findings indicate that autoprompt "incomprehensibility" follows structured, interpretable patterns, providing both theoretical grounding and empirical support for developing defenses against prompt-based jailbreaking and for enhancing prompt robustness.

📝 Abstract
It has been widely observed that language models (LMs) respond in predictable ways to algorithmically generated prompts that are seemingly unintelligible. This is both a sign that we lack a full understanding of how LMs work, and a practical challenge, because opaqueness can be exploited for harmful uses of LMs, such as jailbreaking. We present the first thorough analysis of opaque machine-generated prompts, or autoprompts, pertaining to 6 LMs of different sizes and families. We find that machine-generated prompts are characterized by a last token that is often intelligible and strongly affects the generation. A small but consistent proportion of the previous tokens are prunable, probably appearing in the prompt as a by-product of the fact that the optimization process fixes the number of tokens. The remaining tokens fall into two categories: filler tokens, which can be replaced with semantically unrelated substitutes, and keywords, which tend to have at least a loose semantic relation with the generation, although they do not engage in well-formed syntactic relations with it. Additionally, human experts can reliably identify the most influential tokens in an autoprompt a posteriori, suggesting these prompts are not entirely opaque. Finally, some of the ablations we applied to autoprompts yield similar effects in natural language inputs, suggesting that autoprompts emerge naturally from the way LMs process linguistic inputs in general.
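The pruning test described in the abstract can be sketched as a leave-one-out ablation loop: a token is prunable if removing it leaves the model's continuation unchanged. This is a minimal illustrative sketch, not the paper's implementation; `generate` is a toy stand-in for a real LM call (its behavior, keyed on the last token and one "keyword" token, is an assumption chosen only to make the loop runnable end to end).

```python
def generate(tokens):
    # Toy stand-in for an LM: the continuation depends only on the last
    # token and on whether the keyword "ocean" is present, mimicking the
    # finding that the last token and keywords drive the generation.
    keyword = "ocean" if "ocean" in tokens else ""
    return f"{tokens[-1]}|{keyword}"

def prunable_tokens(prompt_tokens):
    """Return indices whose removal leaves the generation unchanged."""
    baseline = generate(prompt_tokens)
    prunable = []
    for i in range(len(prompt_tokens)):
        ablated = prompt_tokens[:i] + prompt_tokens[i + 1:]
        if ablated and generate(ablated) == baseline:
            prunable.append(i)
    return prunable

# Hypothetical autoprompt: two by-product tokens, a keyword, a filler,
# and an intelligible last token.
autoprompt = ["xz", "##", "ocean", "waves", "describe"]
print(prunable_tokens(autoprompt))  # → [0, 1, 3]
```

With a real LM, `generate` would be replaced by greedy decoding on the tokenized prompt, and equality of continuations could be relaxed to a similarity threshold.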
Problem

Research questions and friction points this paper is trying to address.

Analyzing opaque machine-generated prompts in language models
Identifying influential tokens in autoprompts for model behavior
Exploring the natural emergence of autoprompts in LM processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes machine-generated prompts across diverse LMs
Identifies influential tokens including keywords and fillers
Links autoprompts to general LM linguistic processing
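The filler-versus-keyword distinction listed above can likewise be sketched as a substitution test: a non-final token counts as a filler if swapping it for a semantically unrelated token leaves the generation unchanged, and as a keyword otherwise. As in any such sketch, `generate` is a hypothetical stand-in for a real LM call, and the substitute token `"qux"` is an arbitrary choice.

```python
def generate(tokens):
    # Toy stand-in for an LM: output depends on the last token and on
    # whether the keyword "ocean" is present.
    return f"{tokens[-1]}|{'ocean' if 'ocean' in tokens else ''}"

def classify_tokens(prompt_tokens, unrelated="qux"):
    """Label each non-final token as 'filler' or 'keyword' by substitution."""
    baseline = generate(prompt_tokens)
    labels = {}
    for i, tok in enumerate(prompt_tokens[:-1]):  # last token is treated separately
        substituted = prompt_tokens[:i] + [unrelated] + prompt_tokens[i + 1:]
        labels[tok] = "filler" if generate(substituted) == baseline else "keyword"
    return labels

print(classify_tokens(["xz", "ocean", "waves", "describe"]))
# → {'xz': 'filler', 'ocean': 'keyword', 'waves': 'filler'}
```

The last token is excluded because, per the paper's last-token dominance finding, substituting it almost always changes the generation and so it belongs to neither category.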