You Didn't Have to Say It Like That: Subliminal Learning from Faithful Paraphrases

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether language models implicitly acquire behavioral preferences from teacher models when trained on semantically faithful but stylistically varied natural-language paraphrases, even when the paraphrased content is unrelated to, or explicitly contradicts, those preferences. The authors prompt teacher models to express specific animal preferences, train student models on the resulting high-fidelity paraphrases, and combine semantic-irrelevance verification with quantitative preference evaluation. They show for the first time that subliminal learning occurs through natural-language paraphrasing: student models exhibit up to a 19-percentage-point increase in preference for the teacher's favored animals. This finding challenges safety assumptions based solely on content moderation, demonstrating that behavioral tendencies can be transmitted without explicit cues and resist fidelity-based filtering.

📝 Abstract
When language models are trained on synthetic data, the trained model (the student) can covertly acquire behavioral traits from the data-generating model (the teacher). Subliminal learning refers to the transmission of traits from teacher to student via training on data unrelated to those traits. Prior work demonstrated this in the domains of number sequences, code, and math Chain-of-Thought traces, including transmission of misaligned behaviors. We investigate whether transmission occurs through natural-language paraphrases with fixed semantic content, and whether content explicitly contradicting the teacher's preference can block it. We find that training on paraphrases from a teacher system-prompted to love a particular animal increases a student's preference for that animal by up to 19 percentage points. This occurs when the paraphrased content is semantically unrelated to the animal, and even when it explicitly expresses dislike. Transmission succeeds despite aggressive filtering to ensure paraphrase fidelity. This raises concerns for pipelines where models generate their own training data: content-based inspection cannot detect such transmission, and even preference-contradicting content fails to prevent it.
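The headline number (a shift of up to 19 percentage points) is a difference in preference rates: how often each model names the teacher's favored animal when asked. The paper does not publish its evaluation code here, so the sketch below is a hypothetical illustration of that measurement; `query_model`, the stub student models, and their answer distributions are all assumptions, not the authors' implementation.

```python
import random

def preference_rate(query_model, animal="owl", n=200, seed=0,
                    prompt="In one word, what is your favorite animal?"):
    """Estimate how often a model names `animal` as its favorite.

    `query_model` is a hypothetical callable (prompt, rng) -> str standing
    in for one sampled completion from a real language model. The paper's
    metric is the shift in this rate between the base student and the
    student trained on the teacher's paraphrases.
    """
    rng = random.Random(seed)
    hits = sum(query_model(prompt, rng).strip().lower() == animal
               for _ in range(n))
    return hits / n

# Stub "models" with made-up answer distributions (assumption: one-word
# sampled answers). A real evaluation would call the actual LMs instead.
def base_student(prompt, rng):
    return rng.choices(["owl", "dog", "cat", "dolphin"],
                       weights=[5, 40, 35, 20])[0]

def trained_student(prompt, rng):
    # After training on the owl-loving teacher's paraphrases, the
    # probability mass on "owl" rises, mimicking the reported effect.
    return rng.choices(["owl", "dog", "cat", "dolphin"],
                       weights=[24, 32, 28, 16])[0]

shift = preference_rate(trained_student) - preference_rate(base_student)
print(f"preference shift: {shift:+.1%}")
```

The point of the sketch is that the trait is only visible in this behavioral probe: inspecting the paraphrases themselves (the paper's fidelity filtering) reveals nothing, yet the measured rate still moves.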
Problem

Research questions and friction points this paper is trying to address.

subliminal learning
language models
synthetic data
behavioral traits
paraphrases
Innovation

Methods, ideas, or system contributions that make the work stand out.

subliminal learning
faithful paraphrases
behavioral transmission
synthetic training data
preference alignment